diff --git a/your-code/Lab_main_Matheus_Freire.ipynb b/your-code/Lab_main_Matheus_Freire.ipynb
new file mode 100644
index 0000000..6ee7998
--- /dev/null
+++ b/your-code/Lab_main_Matheus_Freire.ipynb
@@ -0,0 +1,4794 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Web Scraping Lab\n",
+ "\n",
+ "You will find in this notebook some scrapy exercises to practise your scraping skills.\n",
+ "\n",
+ "**Tips:**\n",
+ "\n",
+ "- Check the response status code for each request to ensure you have obtained the intended content.\n",
+ "- Print the response text in each request to understand the kind of info you are getting and its format.\n",
+ "- Check for patterns in the response text to extract the data/info requested in each question.\n",
+ "- Visit the urls below and take a look at their source code through Chrome DevTools. You'll need to identify the html tags, special class names, etc used in the html content you are expected to extract.\n",
+ "\n",
+ "**Resources**:\n",
+ "- [Requests library](http://docs.python-requests.org/en/master/#the-user-guide)\n",
+ "- [Beautiful Soup Doc](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)\n",
+ "- [Urllib](https://docs.python.org/3/library/urllib.html#module-urllib)\n",
+ "- [re lib](https://docs.python.org/3/library/re.html)\n",
+ "- [lxml lib](https://lxml.de/)\n",
+ "- [Scrapy](https://scrapy.org/)\n",
+ "- [List of HTTP status codes](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes)\n",
+ "- [HTML basics](http://www.simplehtmlguide.com/cheatsheet.php)\n",
+ "- [CSS basics](https://www.cssbasics.com/#page_start)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Below are the libraries and modules you may need. `requests`, `BeautifulSoup` and `pandas` are already imported for you. If you prefer to use additional libraries feel free to do it."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import requests\n",
+ "from bs4 import BeautifulSoup\n",
+ "import pandas as pd"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Download, parse (using BeautifulSoup), and print the content from the Trending Developers page from GitHub:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "200"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# This is the url you will scrape in this exercise\n",
+ "url = 'https://github.com/trending/developers'\n",
+ "response = requests.get(url)\n",
+ "response.status_code"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "\n",
+ "
\n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "Trending developers on GitHub today · GitHub \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
Skip to content \n",
+ "
\n",
+ " \n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
{{ message }}
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
Trending \n",
+ "
\n",
+ " These are the developers building the hot tools today.\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " 1\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ "⚡ Building applications with LLMs through composability ⚡ \n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 2\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " A list of awesome compiler projects and papers for tensor computation and deep learning.\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 3\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " A PSP emulator for Android, Windows, Mac and Linux, written in C++. Want to contribute? Join us on Discord at
https://discord.gg/5NJB6dD …\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 4\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ " @calcom
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 5\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Full reference of LinkedIn answers 2023 for skill assessments (aws-lambda, rest-api, javascript, react, git, html, jquery, mongodb, java,…\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 6\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ " @meetup
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 7\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 8\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " A Flutter plugin for displaying local notifications on Android, iOS, macOS and Linux\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 9\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ "🎈 Simple reactive notebooks for Julia\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 10\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data.\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 11\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 12\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 13\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ " @google
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 14\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " An ultra fast (0.0002s read/write), small & encrypted mobile key-value storage framework for React Native written in C++ using JSI\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 15\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " GUsb is a GObject wrapper for libusb1\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 16\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Rust programs written entirely in Rust\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 17\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " jq for binary formats - tool, language and decoders for working with binary and text formats\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 18\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Let us control diffusion models!\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 19\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Extended precision integer C++ library\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 20\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Good first issues for GSoC 2023\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 21\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Rust bindings for the C++ api of PyTorch.\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 22\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " GoReplay is an open-source tool for capturing and replaying live HTTP traffic into a test environment in order to continuously test your …\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 23\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Demonstrating the common patterns when using React, Redux v4, and TypeScript.\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 24\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " Find undefined and unused variables with the PHP Codesniffer static analysis tool.\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ " 25\n",
+ " \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " Popular repo
\n",
+ "\n",
+ "\n",
+ " An example Rust web application with a focus on module structure\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Follow \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " \n",
+ " \n",
+ "
\n",
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " You can’t perform that action at this time.\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ " \n",
+ "
You signed in with another tab or window. Reload to refresh your session. \n",
+ "
You signed out in another tab or window. Reload to refresh your session. \n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ "
\n",
+ " \n",
+ " \n",
+ " \n",
+ "
\n",
+ "
\n",
+ "\n",
+ "
\n",
+ "\n",
+ " \n",
+ " \n",
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "soup = BeautifulSoup(response.content)\n",
+ "print(soup)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 1. Display the names of the trending developers retrieved in the previous step.\n",
+ "\n",
+ "Your output should be a Python list of developer names. Each name should not contain any html tag.\n",
+ "\n",
+ "**Instructions:**\n",
+ "\n",
+ "1. Find out the html tag and class names used for the developer names. You can achieve this using Chrome DevTools or clicking in 'Inspect' on any browser. Here is an example:\n",
+ "\n",
+ "\n",
+ "\n",
+ "2. Use BeautifulSoup `find_all()` to extract all the html elements that contain the developer names. Hint: pass in the `attrs` parameter to specify the class.\n",
+ "\n",
+ "3. Loop through the elements found and get the text for each of them.\n",
+ "\n",
+ "4. While you are at it, use string manipulation techniques to replace whitespaces and linebreaks (i.e. `\\n`) in the *text* of each html element. Use a list to store the clean names. Hint: you may also use `.get_text()` instead of `.text` and pass in the desired parameters to do some string manipulation (check the documentation).\n",
+ "\n",
+ "5. Print the list of names.\n",
+ "\n",
+ "Your output should look like below:\n",
+ "\n",
+ "```\n",
+ "['trimstray (@trimstray)',\n",
+ " 'joewalnes (JoeWalnes)',\n",
+ " 'charlax (Charles-AxelDein)',\n",
+ " 'ForrestKnight (ForrestKnight)',\n",
+ " 'revery-ui (revery-ui)',\n",
+ " 'alibaba (Alibaba)',\n",
+ " 'Microsoft (Microsoft)',\n",
+ " 'github (GitHub)',\n",
+ " 'facebook (Facebook)',\n",
+ " 'boazsegev (Bo)',\n",
+ " 'google (Google)',\n",
+ " 'cloudfetch',\n",
+ " 'sindresorhus (SindreSorhus)',\n",
+ " 'tensorflow',\n",
+ " 'apache (TheApacheSoftwareFoundation)',\n",
+ " 'DevonCrawford (DevonCrawford)',\n",
+ " 'ARMmbed (ArmMbed)',\n",
+ " 'vuejs (vuejs)',\n",
+ " 'fastai (fast.ai)',\n",
+ " 'QiShaoXuan (Qi)',\n",
+ " 'joelparkerhenderson (JoelParkerHenderson)',\n",
+ " 'torvalds (LinusTorvalds)',\n",
+ " 'CyC2018',\n",
+ " 'komeiji-satori (神楽坂覚々)',\n",
+ " 'script-8']\n",
+ " ```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "hwchase17\n",
+ "merrymercy\n",
+ "hrydgard\n",
+ "PeerRich\n",
+ "Ebazhanov\n",
+ "chenrui333\n",
+ "MaikuB\n",
+ "fonsp\n",
+ "jerryjliu\n",
+ "mattleibow\n",
+ "bluwy\n",
+ "sbc100\n",
+ "ammarahm-ed\n",
+ "hughsie\n",
+ "sunfishcode\n",
+ "wader\n",
+ "chfast\n",
+ "hkirat\n",
+ "LaurentMazare\n",
+ "buger\n",
+ "resir014\n",
+ "sirbrillig\n",
+ "KodrAus\n"
+ ]
+ }
+ ],
+ "source": [
+ "box = soup.find_all('div', attrs = {\"class\":\"Box\"})\n",
+ "\n",
+ "article = box[1].find_all('a', attrs = {\"class\":\"Link--secondary\"})\n",
+ "\n",
+ "for profile in article: \n",
+ " print(profile.get_text().replace(\"\\n\", \"\").strip())"
+ ]
+ },
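+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The exercise asks for a Python list rather than printed lines; a minimal variant of the loop above (same selectors, assuming the page layout is unchanged) that collects the cleaned names:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Collect the cleaned developer names into a list instead of printing them\n",
+ "names = [profile.get_text(strip=True) for profile in article]\n",
+ "names"
+ ]
+ },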
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 1.1. Display the trending Python repositories in GitHub.\n",
+ "\n",
+ "The steps to solve this problem is similar to the previous one except that you need to find out the repository names instead of developer names."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the url you will scrape in this exercise\n",
+ "url = 'https://github.com/trending/python?since=daily'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "AIGC-Audio / AudioGPT\n",
+ "deep-floyd / IF\n",
+ "nlpxucan / WizardLM\n",
+ "UX-Decoder / Segment-Everything-Everywhere-All-At-Once\n",
+ "lamini-ai / lamini\n",
+ "xtekky / gpt4free\n",
+ "gventuri / pandas-ai\n",
+ "ZrrSkywalker / LLaMA-Adapter\n",
+ "pytube / pytube\n",
+ "Rapptz / discord.py\n",
+ "goauthentik / authentik\n",
+ "deforum-art / deforum-stable-diffusion\n",
+ "spotDL / spotify-downloader\n",
+ "lm-sys / FastChat\n",
+ "litanlitudan / skyagi\n",
+ "isaiahbjork / Auto-GPT-Crypto-Plugin\n",
+ "farizrahman4u / loopgpt\n",
+ "declare-lab / tango\n",
+ "donnemartin / system-design-primer\n",
+ "alaeddine-13 / thinkgpt\n",
+ "X-PLUG / mPLUG-Owl\n",
+ "mikumifa / biliTickerBuy\n",
+ "ytdl-org / youtube-dl\n",
+ "Nriver / trilium-translation\n",
+ "itamargol / openai\n"
+ ]
+ }
+ ],
+ "source": [
+ "response = requests.get(url)\n",
+ "response\n",
+ "\n",
+ "soup = BeautifulSoup(response.content)\n",
+ "trending = soup.find_all(\"h2\", attrs = {\"class\": \"h3 lh-condensed\"})\n",
+ "trending\n",
+ "for repo in trending: \n",
+ " print(repo.get_text().replace(\"\\n\", \"\").strip())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 2. Display all the image links from Walt Disney wikipedia page.\n",
+ "Hint: use `.get()` to access information inside tags. Check out the documentation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the url you will scrape in this exercise\n",
+ "url = 'https://en.wikipedia.org/wiki/Walt_Disney'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "IndexError",
+ "evalue": "list index out of range",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[1;31mIndexError\u001b[0m Traceback (most recent call last)",
+ "\u001b[1;32m~\\AppData\\Local\\Temp\\ipykernel_9372\\286651990.py\u001b[0m in \u001b[0;36m\u001b[1;34m\u001b[0m\n\u001b[0;32m 4\u001b[0m \u001b[0msoup\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mBeautifulSoup\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mresponse\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mcontent\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 5\u001b[0m \u001b[0msoup1\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0msoup\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfind_all\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"td\"\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mattrs\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m{\u001b[0m\u001b[1;34m\"class\"\u001b[0m\u001b[1;33m:\u001b[0m \u001b[1;34m\"infobox-image\"\u001b[0m\u001b[1;33m}\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 6\u001b[1;33m \u001b[0mphoto\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0msoup1\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfind_all\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"img\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 7\u001b[0m \u001b[1;32mfor\u001b[0m \u001b[0mimg\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mphoto\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 8\u001b[0m \u001b[0mprint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mimg\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mget\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"src\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;31mIndexError\u001b[0m: list index out of range"
+ ]
+ }
+ ],
+ "source": [
+ "response = requests.get(url)\n",
+ "response\n",
+ "\n",
+ "soup = BeautifulSoup(response.content)\n",
+ "soup1 = soup.find_all(\"td\", attrs = {\"class\": \"infobox-image\"})\n",
+ "photo = soup1[0].find_all(\"img\")\n",
+ "for img in photo:\n",
+ " print(img.get(\"src\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 2.1. List all language names and number of related articles in the order they appear in wikipedia.org."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the url you will scrape in this exercise\n",
+ "url = 'https://www.wikipedia.org/'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 40,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "English 6 644 000\n",
+ "Русский 1 909 000\n",
+ "日本語 1 370 000\n",
+ "Deutsch 2 792 000\n",
+ "Español 1 854 000\n",
+ "Français 2 514 000\n",
+ "Italiano 1 806 000\n",
+ "中文 1 347 000\n",
+ "فارسی فارسی\n",
+ "Português 1 101 000\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "'Português'"
+ ]
+ },
+ "execution_count": 40,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "response = requests.get(url)\n",
+ "response\n",
+ "\n",
+ "soup = BeautifulSoup(response.content)\n",
+ "n_language = soup.find_all(\"div\", attrs = {\"class\": \"central-featured-lang\"})\n",
+ "for i in n_language:\n",
+ " n_language = i.find(\"strong\").get_text().strip()\n",
+ " n_article = i.find(\"bdi\").text.strip().split(\"+\")[0]\n",
+ " print(n_language, n_article)\n",
+ "n_language"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 2.2. Display the top 10 languages by number of native speakers stored in a pandas dataframe.\n",
+ "Hint: After finding the correct table you want to analyse, you can use a nested **for** loop to find the elements row by row (check out the 'td' and 'tr' tags). An easier way to do it is using pd.read_html(), check out documentation [here](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.read_html.html)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the url you will scrape in this exercise\n",
+ "url = 'https://en.wikipedia.org/wiki/List_of_languages_by_number_of_native_speakers'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 43,
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "ValueError",
+ "evalue": "No tables found",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[1;31mValueError\u001b[0m Traceback (most recent call last)",
+ "\u001b[1;32m~\\AppData\\Local\\Temp\\ipykernel_9372\\3772482250.py\u001b[0m in \u001b[0;36m\u001b[1;34m\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0mpd\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mread_html\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0murl\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2\u001b[0m \u001b[0mtables\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mpd\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mread_html\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0murl\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 3\u001b[0m \u001b[0mtables\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;32m~\\anaconda3\\lib\\site-packages\\pandas\\util\\_decorators.py\u001b[0m in \u001b[0;36mwrapper\u001b[1;34m(*args, **kwargs)\u001b[0m\n\u001b[0;32m 309\u001b[0m \u001b[0mstacklevel\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mstacklevel\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 310\u001b[0m )\n\u001b[1;32m--> 311\u001b[1;33m \u001b[1;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0margs\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 312\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 313\u001b[0m \u001b[1;32mreturn\u001b[0m \u001b[0mwrapper\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;32m~\\anaconda3\\lib\\site-packages\\pandas\\io\\html.py\u001b[0m in \u001b[0;36mread_html\u001b[1;34m(io, match, flavor, header, index_col, skiprows, attrs, parse_dates, thousands, encoding, decimal, converters, na_values, keep_default_na, displayed_only)\u001b[0m\n\u001b[0;32m 1111\u001b[0m \u001b[0mio\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mstringify_path\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mio\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1112\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m-> 1113\u001b[1;33m return _parse(\n\u001b[0m\u001b[0;32m 1114\u001b[0m \u001b[0mflavor\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mflavor\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1115\u001b[0m \u001b[0mio\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mio\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;32m~\\anaconda3\\lib\\site-packages\\pandas\\io\\html.py\u001b[0m in \u001b[0;36m_parse\u001b[1;34m(flavor, io, match, attrs, encoding, displayed_only, **kwargs)\u001b[0m\n\u001b[0;32m 937\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 938\u001b[0m \u001b[1;32massert\u001b[0m \u001b[0mretained\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;31m# for mypy\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 939\u001b[1;33m \u001b[1;32mraise\u001b[0m \u001b[0mretained\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 940\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 941\u001b[0m \u001b[0mret\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;32m~\\anaconda3\\lib\\site-packages\\pandas\\io\\html.py\u001b[0m in \u001b[0;36m_parse\u001b[1;34m(flavor, io, match, attrs, encoding, displayed_only, **kwargs)\u001b[0m\n\u001b[0;32m 917\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 918\u001b[0m \u001b[1;32mtry\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 919\u001b[1;33m \u001b[0mtables\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mp\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mparse_tables\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 920\u001b[0m \u001b[1;32mexcept\u001b[0m \u001b[0mValueError\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mcaught\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 921\u001b[0m \u001b[1;31m# if `io` is an io-like object, check if it's seekable\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;32m~\\anaconda3\\lib\\site-packages\\pandas\\io\\html.py\u001b[0m in \u001b[0;36mparse_tables\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m 237\u001b[0m \u001b[0mlist\u001b[0m \u001b[0mof\u001b[0m \u001b[0mparsed\u001b[0m \u001b[1;33m(\u001b[0m\u001b[0mheader\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mbody\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfooter\u001b[0m\u001b[1;33m)\u001b[0m \u001b[0mtuples\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0mtables\u001b[0m\u001b[1;33m.\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 238\u001b[0m \"\"\"\n\u001b[1;32m--> 239\u001b[1;33m \u001b[0mtables\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_parse_tables\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_build_doc\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmatch\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mattrs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 240\u001b[0m \u001b[1;32mreturn\u001b[0m \u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_parse_thead_tbody_tfoot\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mtable\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32mfor\u001b[0m \u001b[0mtable\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mtables\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 241\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;32m~\\anaconda3\\lib\\site-packages\\pandas\\io\\html.py\u001b[0m in \u001b[0;36m_parse_tables\u001b[1;34m(self, doc, match, attrs)\u001b[0m\n\u001b[0;32m 567\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 568\u001b[0m \u001b[1;32mif\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[0mtables\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 569\u001b[1;33m \u001b[1;32mraise\u001b[0m \u001b[0mValueError\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"No tables found\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 570\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 571\u001b[0m \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[1;31mValueError\u001b[0m: No tables found"
+ ]
+ }
+ ],
+ "source": [
+ "pd.read_html(url)\n",
+ "tables = pd.read_html(url)\n",
+ "tables"
+ ]
+ },
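+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The nested-loop approach from the hint, as a minimal sketch: it walks the first `wikitable` on the page row by row. That this first table is the ranking table is an assumption to verify in DevTools."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "soup = BeautifulSoup(requests.get(url).content)\n",
+ "table = soup.find('table', attrs={'class': 'wikitable'})\n",
+ "\n",
+ "# Nested loops: outer over table rows (<tr>), inner over each row's cells\n",
+ "rows = []\n",
+ "for tr in table.find_all('tr'):\n",
+ "    cells = [cell.get_text(strip=True) for cell in tr.find_all(['th', 'td'])]\n",
+ "    if cells:\n",
+ "        rows.append(cells)\n",
+ "\n",
+ "# Row 0 is the header row; rows 1-10 are the top 10 languages\n",
+ "pd.DataFrame(rows).head(11)"
+ ]
+ },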
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 3. Display IMDB's top 250 data (movie name, initial release, director name and stars) as a pandas dataframe.\n",
+ "Hint: If you hover over the title of the movie, you should see the director's name. Can you find where it's stored in the html?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the url you will scrape in this exercise \n",
+ "url = 'https://www.imdb.com/chart/top'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# your code here"
+ ]
+ },
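+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch, assuming IMDB's classic chart markup where each movie sits in a `td.titleColumn` cell and the link's `title` attribute holds 'Director (dir.), Star1, Star2'. IMDB redesigns its pages periodically, so these selectors may need updating against the live page."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "headers = {'Accept-Language': 'en-US'}  # keep titles in English\n",
+ "soup = BeautifulSoup(requests.get(url, headers=headers).content)\n",
+ "\n",
+ "rows = []\n",
+ "for cell in soup.find_all('td', attrs={'class': 'titleColumn'}):\n",
+ "    link = cell.a\n",
+ "    # the title attribute looks like 'Frank Darabont (dir.), Tim Robbins, ...'\n",
+ "    director, _, stars = link.get('title', '').partition(' (dir.), ')\n",
+ "    rows.append({'movie': link.get_text(strip=True),\n",
+ "                 'initial release': cell.find('span', attrs={'class': 'secondaryInfo'}).get_text(strip=True).strip('()'),\n",
+ "                 'director': director,\n",
+ "                 'stars': stars})\n",
+ "\n",
+ "pd.DataFrame(rows)"
+ ]
+ },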
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 3.1. Display the movie name, year and a brief summary of the top 10 random movies (IMDB) as a pandas dataframe."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#This is the url you will scrape in this exercise\n",
+ "url = 'https://www.imdb.com/list/ls009796553/'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# your code here"
+ ]
+ },
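+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch for the list page, assuming the `lister-item-content` layout in which each item's second paragraph carries the plot summary; both selectors are assumptions to check in DevTools."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "soup = BeautifulSoup(requests.get(url, headers={'Accept-Language': 'en-US'}).content)\n",
+ "\n",
+ "rows = []\n",
+ "for item in soup.find_all('div', attrs={'class': 'lister-item-content'})[:10]:\n",
+ "    header = item.find('h3', attrs={'class': 'lister-item-header'})\n",
+ "    rows.append({'movie': header.a.get_text(strip=True),\n",
+ "                 'year': header.find('span', attrs={'class': 'lister-item-year'}).get_text(strip=True),\n",
+ "                 # the plot is assumed to be the item's second paragraph\n",
+ "                 'summary': item.find_all('p')[1].get_text(strip=True)})\n",
+ "\n",
+ "pd.DataFrame(rows)"
+ ]
+ },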
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Bonus"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Find the live weather report (temperature, wind speed, description and weather) of a given city."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#https://openweathermap.org/current\n",
+ "city = input('Enter the city: ')\n",
+ "url = 'http://api.openweathermap.org/data/2.5/weather?'+'q='+city+'&APPID=b35975e18dc93725acb092f7272cc6b8&units=metric'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# your code here"
+ ]
+ },
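+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch that reads the JSON layout documented at https://openweathermap.org/current; the temperature comes back in °C because the url requests `units=metric`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Parse the JSON response; key paths follow the documented schema\n",
+ "data = requests.get(url).json()\n",
+ "\n",
+ "report = {'city': data['name'],\n",
+ "          'temperature (°C)': data['main']['temp'],\n",
+ "          'wind speed (m/s)': data['wind']['speed'],\n",
+ "          'description': data['weather'][0]['description'],\n",
+ "          'weather': data['weather'][0]['main']}\n",
+ "report"
+ ]
+ },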
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Find the book name, price and stock availability as a pandas dataframe."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the url you will scrape in this exercise. \n",
+ "# It is a fictional bookstore created to be scraped. \n",
+ "url = 'http://books.toscrape.com/'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# your code here"
+ ]
+ },
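+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch for the front page of books.toscrape.com, where each book is an `article.product_pod` with the full title in the `h3` link's `title` attribute and price/availability in dedicated `<p>` tags:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "soup = BeautifulSoup(requests.get(url).content)\n",
+ "\n",
+ "rows = []\n",
+ "for book in soup.find_all('article', attrs={'class': 'product_pod'}):\n",
+ "    rows.append({'name': book.h3.a.get('title'),\n",
+ "                 'price': book.find('p', attrs={'class': 'price_color'}).get_text(strip=True),\n",
+ "                 'stock': book.find('p', attrs={'class': 'instock availability'}).get_text(strip=True)})\n",
+ "\n",
+ "pd.DataFrame(rows)"
+ ]
+ },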
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Display the 100 latest earthquakes info (date, time, latitude, longitude and region name) by the EMSC as a pandas dataframe.\n",
+ "***Hint:*** Here the displayed number of earthquakes per page is 20, but you can easily move to the next page by looping through the desired number of pages and adding it to the end of the url."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the url you will scrape in this exercise\n",
+ "url = 'https://www.emsc-csem.org/Earthquake/?view='\n",
+ "\n",
+ "# This is how you will loop through each page:\n",
+ "number_of_pages = int(100/20)\n",
+ "each_page_urls = []\n",
+ "\n",
+ "for n in range(1, number_of_pages+1):\n",
+ " link = url+str(n)\n",
+ " each_page_urls.append(link)\n",
+ " \n",
+ "each_page_urls"
+ ]
+ },
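+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch using the page urls built above: each EMSC page embeds the earthquake list as an HTML table, so `pd.read_html` can parse it. That the list is the first parsed table is an assumption, and the resulting columns will likely need renaming to date/time, latitude, longitude and region."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Parse each page and stack the results, keeping the latest 100 quakes\n",
+ "frames = []\n",
+ "for page_url in each_page_urls:\n",
+ "    page = requests.get(page_url)\n",
+ "    frames.append(pd.read_html(page.text)[0])  # assumed: first table = list\n",
+ "\n",
+ "earthquakes = pd.concat(frames, ignore_index=True).head(100)\n",
+ "earthquakes"
+ ]
+ },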
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# your code here"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/your-code/main.ipynb b/your-code/main.ipynb
deleted file mode 100755
index 1fe9046..0000000
--- a/your-code/main.ipynb
+++ /dev/null
@@ -1,417 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Web Scraping Lab\n",
- "\n",
- "You will find in this notebook some scrapy exercises to practise your scraping skills.\n",
- "\n",
- "**Tips:**\n",
- "\n",
- "- Check the response status code for each request to ensure you have obtained the intended content.\n",
- "- Print the response text in each request to understand the kind of info you are getting and its format.\n",
- "- Check for patterns in the response text to extract the data/info requested in each question.\n",
- "- Visit the urls below and take a look at their source code through Chrome DevTools. You'll need to identify the html tags, special class names, etc used in the html content you are expected to extract.\n",
- "\n",
- "**Resources**:\n",
- "- [Requests library](http://docs.python-requests.org/en/master/#the-user-guide)\n",
- "- [Beautiful Soup Doc](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)\n",
- "- [Urllib](https://docs.python.org/3/library/urllib.html#module-urllib)\n",
- "- [re lib](https://docs.python.org/3/library/re.html)\n",
- "- [lxml lib](https://lxml.de/)\n",
- "- [Scrapy](https://scrapy.org/)\n",
- "- [List of HTTP status codes](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes)\n",
- "- [HTML basics](http://www.simplehtmlguide.com/cheatsheet.php)\n",
- "- [CSS basics](https://www.cssbasics.com/#page_start)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Below are the libraries and modules you may need. `requests`, `BeautifulSoup` and `pandas` are already imported for you. If you prefer to use additional libraries feel free to do it."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import requests\n",
- "from bs4 import BeautifulSoup\n",
- "import pandas as pd"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Download, parse (using BeautifulSoup), and print the content from the Trending Developers page from GitHub:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise\n",
- "url = 'https://github.com/trending/developers'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 1. Display the names of the trending developers retrieved in the previous step.\n",
- "\n",
- "Your output should be a Python list of developer names. Each name should not contain any html tag.\n",
- "\n",
- "**Instructions:**\n",
- "\n",
- "1. Find out the html tag and class names used for the developer names. You can achieve this using Chrome DevTools or clicking in 'Inspect' on any browser. Here is an example:\n",
- "\n",
- "\n",
- "\n",
- "2. Use BeautifulSoup `find_all()` to extract all the html elements that contain the developer names. Hint: pass in the `attrs` parameter to specify the class.\n",
- "\n",
- "3. Loop through the elements found and get the text for each of them.\n",
- "\n",
- "4. While you are at it, use string manipulation techniques to replace whitespaces and linebreaks (i.e. `\\n`) in the *text* of each html element. Use a list to store the clean names. Hint: you may also use `.get_text()` instead of `.text` and pass in the desired parameters to do some string manipulation (check the documentation).\n",
- "\n",
- "5. Print the list of names.\n",
- "\n",
- "Your output should look like below:\n",
- "\n",
- "```\n",
- "['trimstray (@trimstray)',\n",
- " 'joewalnes (JoeWalnes)',\n",
- " 'charlax (Charles-AxelDein)',\n",
- " 'ForrestKnight (ForrestKnight)',\n",
- " 'revery-ui (revery-ui)',\n",
- " 'alibaba (Alibaba)',\n",
- " 'Microsoft (Microsoft)',\n",
- " 'github (GitHub)',\n",
- " 'facebook (Facebook)',\n",
- " 'boazsegev (Bo)',\n",
- " 'google (Google)',\n",
- " 'cloudfetch',\n",
- " 'sindresorhus (SindreSorhus)',\n",
- " 'tensorflow',\n",
- " 'apache (TheApacheSoftwareFoundation)',\n",
- " 'DevonCrawford (DevonCrawford)',\n",
- " 'ARMmbed (ArmMbed)',\n",
- " 'vuejs (vuejs)',\n",
- " 'fastai (fast.ai)',\n",
- " 'QiShaoXuan (Qi)',\n",
- " 'joelparkerhenderson (JoelParkerHenderson)',\n",
- " 'torvalds (LinusTorvalds)',\n",
- " 'CyC2018',\n",
- " 'komeiji-satori (神楽坂覚々)',\n",
- " 'script-8']\n",
- " ```"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 1.1. Display the trending Python repositories in GitHub.\n",
- "\n",
- "The steps to solve this problem is similar to the previous one except that you need to find out the repository names instead of developer names."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise\n",
- "url = 'https://github.com/trending/python?since=daily'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 2. Display all the image links from Walt Disney wikipedia page.\n",
- "Hint: use `.get()` to access information inside tags. Check out the documentation."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise\n",
- "url = 'https://en.wikipedia.org/wiki/Walt_Disney'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 2.1. List all language names and number of related articles in the order they appear in wikipedia.org."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise\n",
- "url = 'https://www.wikipedia.org/'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 2.2. Display the top 10 languages by number of native speakers stored in a pandas dataframe.\n",
- "Hint: After finding the correct table you want to analyse, you can use a nested **for** loop to find the elements row by row (check out the 'td' and 'tr' tags). An easier way to do it is using pd.read_html(), check out documentation [here](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.read_html.html)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise\n",
- "url = 'https://en.wikipedia.org/wiki/List_of_languages_by_number_of_native_speakers'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 3. Display IMDB's top 250 data (movie name, initial release, director name and stars) as a pandas dataframe.\n",
- "Hint: If you hover over the title of the movie, you should see the director's name. Can you find where it's stored in the html?"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise \n",
- "url = 'https://www.imdb.com/chart/top'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 3.1. Display the movie name, year and a brief summary of the top 10 random movies (IMDB) as a pandas dataframe."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "#This is the url you will scrape in this exercise\n",
- "url = 'https://www.imdb.com/list/ls009796553/'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Bonus"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Find the live weather report (temperature, wind speed, description and weather) of a given city."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "#https://openweathermap.org/current\n",
- "city = input('Enter the city: ')\n",
- "url = 'http://api.openweathermap.org/data/2.5/weather?'+'q='+city+'&APPID=b35975e18dc93725acb092f7272cc6b8&units=metric'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Find the book name, price and stock availability as a pandas dataframe."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise. \n",
- "# It is a fictional bookstore created to be scraped. \n",
- "url = 'http://books.toscrape.com/'"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Display the 100 latest earthquakes info (date, time, latitude, longitude and region name) by the EMSC as a pandas dataframe.\n",
- "***Hint:*** Here the displayed number of earthquakes per page is 20, but you can easily move to the next page by looping through the desired number of pages and adding it to the end of the url."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# This is the url you will scrape in this exercise\n",
- "url = 'https://www.emsc-csem.org/Earthquake/?view='\n",
- "\n",
- "# This is how you will loop through each page:\n",
- "number_of_pages = int(100/20)\n",
- "each_page_urls = []\n",
- "\n",
- "for n in range(1, number_of_pages+1):\n",
- " link = url+str(n)\n",
- " each_page_urls.append(link)\n",
- " \n",
- "each_page_urls"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# your code here"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.7"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}