Introduction to programming using Python

Session 12

Matthieu Choplin


  • Introduction to the web and web scraping

Client Server

Computers talk to each other across the internet

The client requests information; the server is always listening for requests


Protocols of communication between computers: TCP (reliable: no packet is lost), UDP (faster, used for streaming), IP (addressing and routing, so messages reach the right machine), HTTP (the protocol of the web)

DNS: a big online phone book that matches a domain name to an IP address

Ports: computers communicate on specific ports; the first 1024 ports are reserved for well-known services. A common port used in development: 8080

Localhost: the IP address ( that refers to our own machine when developing locally (not reachable from the internet)

HTTP: the language of the web

The client tells the server what it wants using methods such as GET (retrieve information) and POST (send or modify information)

The server returns a status code and content: an HTML page, images...
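The status codes a server returns have standard reason phrases, and Python's standard library ships the full mapping, which is handy for a quick look-up (a minimal sketch using only the stdlib, no network needed):

```python
# http.client.responses maps each standard HTTP status
# code to its reason phrase
from http.client import responses

print(responses[200])  # OK
print(responses[404])  # Not Found
print(responses[301])  # Moved Permanently
```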

The HTML language

  • The primary language of information on the internet is HTML
  • Every webpage is written in HTML
  • To see the source code of the webpage you are currently viewing, either right click and select "View Page Source", or, from the top menu of your browser, click on View and then "View Source"



<html><head><meta http-equiv="Content-Type"
content="text/html; charset=windows-1252">
<title>Profile: Aphrodite</title>
<link rel="stylesheet" type="text/css"></head>
<body bgcolor="yellow">
<img src="./Profile_ Aphrodite_files/aphrodite.gif">
<h2>Name: Aphrodite</h2>
Favorite animal: Dove
Favorite color: Red
Hometown: Mount Olympus
</body></html>

Installing BeautifulSoup

  • You can install it with pip: pip install beautifulsoup4, or from within PyCharm's package manager

Using Beautiful Soup

from bs4 import BeautifulSoup
from urllib.request import urlopen

my_address = ""  # URL of the page to scrape (omitted in the original slide)
html_page = urlopen(my_address)
html_text = html_page.read().decode('utf-8')
my_soup = BeautifulSoup(html_text, "html.parser")

BeautifulSoup: get_text()

  • get_text()
    • is extracting only the text from an html document
    • print(my_soup.get_text())
  • there are a lot of blank lines left, but we can remove them with the string method replace()
  • print(my_soup.get_text().replace("\n\n\n",""))
  • Using BeautifulSoup to extract the text first and then searching it with the string method find() is sometimes easier than using regular expressions
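The steps above can be tried on a small HTML string instead of a live page (the markup here is made up for the example):

```python
from bs4 import BeautifulSoup

# a small, made-up document standing in for a real page
html_text = """<html><head><title>Profile: Aphrodite</title></head>
<body>
<h2>Name: Aphrodite</h2>


Favorite animal: Dove
</body></html>"""

my_soup = BeautifulSoup(html_text, "html.parser")
# extract only the text, then collapse the extra blank lines
text = my_soup.get_text().replace("\n\n\n", "\n")
print(text)
```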

BeautifulSoup: find_all()

  • find_all()
    • returns a list of all the elements matching the tag name given as argument
  • What if the HTML page is broken?
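How a broken page is repaired depends on the parser; here is a sketch with a made-up snippet and the stdlib "html.parser" backend, which treats <img> as a void tag and still finds both images:

```python
from bs4 import BeautifulSoup

# made-up snippet with an unclosed <img> tag, as on real pages
broken_html = """<body>
<img src="dionysus.jpg"/>
<img src="grapes.png">
Favorite Color: Wine
</body>"""

soup = BeautifulSoup(broken_html, "html.parser")
images = soup.find_all("img")   # a list of Tag objects
print(len(images))              # 2
print(images[1]["src"])         # grapes.png
```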

BeautifulSoup: Tags

[<img src="dionysus.jpg"/>, <img src="grapes.png"><br><br>
Hometown: Mount Olympus
Favorite animal: Leopard <br>
Favorite Color: Wine
  • This is not what we were looking for. The <img> tag is not properly closed, so BeautifulSoup ends up adding a fair amount of HTML after the image tag before inserting a closing </img> tag on its own. This can happen with real-world pages.
  • NB: BeautifulSoup is storing HTML tags as Tag objects and we can extract information from each Tag.

BeautifulSoup: Extracting information from Tags

  • Tags:
    • have a name
    • have attributes, accessible using keys, like when we access values of a dictionary through its keys
for tag in my_soup.find_all("img"):
    print(tag["src"])  # attributes are accessed like dictionary keys

BeautifulSoup: accessing a Tag through its name

  • The HTML is cleaned up
  • We can use the string attribute stored by the title tag
  • print(my_soup.title.string)

The select method (1)

  • ... will return a list of Tag objects, which is how Beautiful Soup represents an HTML element. The list will contain one Tag object for every match in the BeautifulSoup object's HTML

The select method (2)

Selector passed to the select method, and what it will match:

  • select('div'): all elements named <div>
  • select('#author'): the element with an id attribute of author
  • select('.notice'): all elements that use a CSS class attribute named notice
  • select('div span'): all elements named <span> that are within an element named <div>
  • select('div > span'): all elements named <span> that are directly within an element named <div>, with no other element in between
  • select('input[name]'): all elements named <input> that have a name attribute with any value
  • select('input[type="button"]'): all elements named <input> that have an attribute named type with value button
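A few of these selectors can be tried on a small made-up document:

```python
from bs4 import BeautifulSoup

# tiny invented snippet covering three selector styles
html = """<div id="author">Al</div>
<div><span>nested</span></div>
<input type="button" name="go">"""

soup = BeautifulSoup(html, "html.parser")
print(soup.select("#author")[0].text)            # Al
print(soup.select("div > span")[0].text)         # nested
print(len(soup.select('input[type="button"]')))  # 1
```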

Emulating a web browser

  • Sometimes we need to submit information to a web page, like a login page
  • We need a web browser for that
  • MechanicalSoup is an alternative to urllib that can do all the same things, with added functionality that lets us talk back to web pages without a standalone browser. It is perfect for fetching web pages, clicking on buttons and links, and filling out and submitting forms

Installing MechanicalSoup

  • You can install it with pip: pip install MechanicalSoup or within Pycharm (like what we did earlier with BeautifulSoup)
  • You might need to restart your IDE for MechanicalSoup to load and be recognised

MechanicalSoup: Opening a web page

  • Create a browser
  • Get a web page which is a Response object
  • Access the HTML content with the soup attribute
  • import mechanicalsoup
    my_browser = mechanicalsoup.Browser()
    page = my_browser.get("")  # URL of the page, omitted in the original slide

MechanicalSoup: Submitting values to a form

  • Have a look at this login page
  • The important section is the login form
  • We can see that there is a submission <form> named "login" that includes two <input> tags, one named username and the other one named password.
  • The third <input> is the actual "Submit" button

MechanicalSoup: script to login

import mechanicalsoup

my_browser = mechanicalsoup.Browser()
login_page = my_browser.get("")  # URL of the login page, omitted in the original slide
login_html = login_page.soup

form = login_html.select("form")[0]
form.select("input")[0]["value"] = "admin"
form.select("input")[1]["value"] = "default"

profiles_page = my_browser.submit(form, login_page.url)

Methods in MechanicalSoup

  • We created a Browser object
  • We called the method get on the Browser object to get a web page
  • We used the select() method to grab the form and input values in it

Interacting with the Web in Real Time

  • We want to get data from a website that is constantly updated
  • We actually want to simulate clicking on the "refresh" button
  • We can do that with the get method of MechanicalSoup

Use case: fetching the stock quote from Yahoo finance (1)

Use case: fetching the stock quote from Yahoo finance (2)

  • If we look at the source code, we can see what the tag is for the stock and how to retrieve it:
  • <div class="price">40.08</div>
  • We check that <div class="price"> only appears once in the webpage since it will be a way to identify the location of the current price

MechanicalSoup: script to find Yahoo current price

import mechanicalsoup

my_browser = mechanicalsoup.Browser()
page = my_browser.get("")  # URL of the quote page, omitted in the original slide
html_text = page.soup
# return a list of all the tags where
# the css class is 'price'
my_tags = html_text.select(".price")
# take the BeautifulSoup string out of the
# first (and only) <div> tag
my_price = my_tags[0].text
print("The current price of "
      "YHOO is: {}".format(my_price))

Repeatedly get the Yahoo current price

  • Now that we know how to get the price of a stock from the Yahoo finance web page, we can create a loop to stay up to date
  • Note that we should not overload the website with more requests than we need, and we should also have a look at its robots.txt file to be sure that what we do is allowed

Introduction to the time.sleep() method

  • The sleep() function of the time module takes a number of seconds as argument and waits for that many seconds; it enables us to delay the execution of a statement in the program
from time import sleep
print("I'm about to wait for five seconds...")
sleep(5)
print("Done waiting!")

Repeatedly get the Yahoo current price: script

from time import sleep
import mechanicalsoup
my_browser = mechanicalsoup.Browser()
# obtain 1 stock quote per minute for the next 3 minutes
for i in range(0, 3):
    page = my_browser.get("")  # URL of the quote page, omitted in the original slide
    html_text = page.soup
    # return a list of all the tags where the class is 'price'
    my_tags = html_text.select(".price")
    # take the BeautifulSoup string out of the first tag
    my_price = my_tags[0].text
    print("The current price of YHOO is: {}".format(my_price))
    if i < 2:  # wait a minute if this isn't the last request
        sleep(60)

Exercise: putting it all together

  • Install a new library called requests
  • Using the select method of BeautifulSoup, parse (that is, analyze and identify the parts of) the page to locate its image of the day (the site's URL is omitted in the original slide)
  • Using the get method of the requests library, download the image
  • Complete the following program

Using requests

  • You first have to import it
  • import requests
  • If you want to download the webpage, use the get() method with a url in parameter, such as:
  • res = requests.get(url)
  • Stop your program if there is an error with the raise_for_status() method
  • res.raise_for_status()
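Putting those two calls together, a download helper might look like this. This is only a sketch: the function name is made up, the URL is whatever page you are scraping, and writing the response body in chunks with iter_content() is the usual way to save binary data such as an image:

```python
import requests

def download_image(url, filename):
    """Download url and save its bytes to filename, stopping on any HTTP error."""
    res = requests.get(url)
    res.raise_for_status()  # raises an exception for 4xx/5xx responses
    with open(filename, "wb") as image_file:
        # write the response body in chunks rather than all at once
        for chunk in res.iter_content(100000):
            image_file.write(chunk)
```

The function is defined but not called here, since the target URL is not given in the slides.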

Solution for Image Downloader

