How-To: Get Live Exchange Rates to the Terminal

Today I wanted to showcase a Python script I wrote to scrape a website and retrieve live exchange rates from around the world.  When run, the script prints the rates as a table in the terminal.

What is the value of this?  For one, the data is live as soon as the script runs.  And now that the script exists, you could feed its data into another function, or send it to another program entirely; essentially, you can manipulate the data however you want.
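For instance, one hypothetical next step (not part of the script below) would be writing the scraped rates out as CSV so another program could pick them up:

```python
import csv
import io

# Hypothetical scraped values -- in the real script these would come
# from the lists filled during the scrape
rates = [("Canada", "1.36"), ("Japan", "151.2")]

# Writing to an in-memory buffer here; an open file works the same way
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Country", "USD"])
writer.writerows(rates)
print(buf.getvalue())
```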

A piece that I am not highlighting in this blog entry is the whole process of finding viable targets to scrape and how to efficiently analyze their source code.  When you develop a scraping script, the code is built around the structure of a specific site.  You are literally searching through code that someone else designed.  What happens when their code changes, even a little?  Your whole scrape script could break.  That is why care and attention should be taken when selecting a target.  I will highlight some of my process in a future blog entry.

(You should know that this code was built around a specific target; it will not work on a different one.  The goal here is to give you the right push to take this script and build it around a target of your choosing.)

Let's go through this script and take a look at some of the specific elements that can lead to a good data scrape.

import os
from bs4 import BeautifulSoup
import requests
from prettytable import PrettyTable

if os.name == "nt":
    os.system("cls")
else:
    os.system("clear")


countryGrab = []
compareToUSD = [] #United States
compareToEUR = [] #Eurozone
compareToJPY = [] #Japan
compareToGBP = [] #Britain
compareToCHF = [] #Switzerland
compareToCAD = [] #Canada
compareToAUD = [] #Australia
compareToHKD = [] #Hong Kong

x = PrettyTable()
x.field_names = ["Country","USD","EUR","JPY","GBP",
"CHF","CAD","AUD","HKD"]


url = "<insert website here>"
page = requests.get(url)
soup = BeautifulSoup(page.content,"html.parser")
results = soup.find("tbody")

rows = results.find_all("tr")
num = 0
for row in rows:
    cell = row.find_all("td")
    
    links = cell[0].find_all('a')
    countryGrab.append(links[1].text)

    compareToUSD.append(cell[1].text)
    compareToEUR.append(cell[2].text)
    compareToJPY.append(cell[3].text)
    compareToGBP.append(cell[4].text)
    compareToCHF.append(cell[5].text)
    compareToCAD.append(cell[6].text)
    compareToAUD.append(cell[7].text)
    compareToHKD.append(cell[8].text)
    
    x.add_row([countryGrab[num],compareToUSD[num],
    compareToEUR[num],compareToJPY[num],
    compareToGBP[num],compareToCHF[num],
    compareToCAD[num],compareToAUD[num],
    compareToHKD[num]])
    num = num +1

print(x)





Let's take a look at a few key pieces in the script:
import os
from bs4 import BeautifulSoup
import requests
from prettytable import PrettyTable

if os.name == "nt":
    os.system("cls")
else:
    os.system("clear")

There are three key modules at work here:
1. bs4 (the scraping module)
2. requests (a module for making HTTP requests)
3. prettytable (for displaying data in table format)

bs4/BeautifulSoup is widely used; it allows us to take a grabbed website and cycle through its tags to find the content we want.  I will not be going in depth on BeautifulSoup in this blog entry (saving that for another one).

requests, also very popular, goes hand in hand with BeautifulSoup.  It is used to actually grab the website, and it makes an object for BeautifulSoup to sift through.

prettytable: it's in the name, really.

Above you also see that I am clearing the screen, using a quick-and-dirty technique.  With "os.name" we can determine what type of operating system the script is running on, and the terminal is cleared accordingly.
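A quick way to see which branch your machine would take (this just inspects os.name without actually clearing anything):

```python
import os

# "nt" means Windows, which uses "cls"; anything else gets the
# POSIX-style "clear"
command = "cls" if os.name == "nt" else "clear"
print(f"os.name is {os.name!r}, so the script would run {command!r}")
```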


####################################################
 
countryGrab = []
compareToUSD = [] #United States
compareToEUR = [] #Eurozone
compareToJPY = [] #Japan
compareToGBP = [] #Britain
compareToCHF = [] #Switzerland
compareToCAD = [] #Canada
compareToAUD = [] #Australia
compareToHKD = [] #Hong Kong

x = PrettyTable()
x.field_names = ["Country","USD","EUR","JPY","GBP",
"CHF","CAD","AUD","HKD"]

Here I am creating some empty lists that are going to get filled up in just a moment.

Underneath the empty lists I create an object "x" for the PrettyTable, followed by the names for the table headers.  As you can see, these are the currency codes (plus a Country column).
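As an aside, if prettytable isn't available, the same headers-then-rows idea can be sketched with plain string formatting (the values here are made up for illustration):

```python
# A stdlib-only stand-in for the PrettyTable pattern:
# define headers once, add rows, then render everything aligned.
field_names = ["Country", "USD", "EUR"]
rows = [["Canada", "1.36", "1.25"], ["Japan", "151.2", "139.8"]]

# Each column is as wide as its widest entry (header included)
widths = [max(len(str(v)) for v in [h] + [r[i] for r in rows])
          for i, h in enumerate(field_names)]
header = " | ".join(h.ljust(w) for h, w in zip(field_names, widths))
lines = [header, "-" * len(header)]
for r in rows:
    lines.append(" | ".join(str(v).ljust(w) for v, w in zip(r, widths)))
table = "\n".join(lines)
print(table)
```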

####################################################
url = "<insert website here>"
page = requests.get(url)
soup = BeautifulSoup(page.content,"html.parser")
results = soup.find("tbody")

rows = results.find_all("tr")
num = 0
-The url variable is set to a string holding the website address
-Create a page object using requests
-Create a soup object from the page content
-Create a results object; at this point we are searching for the table tag "tbody"
-The results object now holds all the website content that was inside the "tbody" tags.  We want to dive deeper inside "tbody", so we create another object, rows, by searching inside "tbody" for all the table rows ("tr")
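The same find/find_all chain can be tried against a tiny inline page.  The markup below is made up, but it mirrors the tbody/tr structure the script expects, and adds a guard for when find() comes back empty (the kind of breakage mentioned earlier when a site changes):

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for the real target's table
html = """
<table><tbody>
  <tr><td><a href="#">flag</a><a href="#">Canada</a></td><td>1.36</td></tr>
  <tr><td><a href="#">flag</a><a href="#">Japan</a></td><td>151.2</td></tr>
</tbody></table>
"""

soup = BeautifulSoup(html, "html.parser")
results = soup.find("tbody")

# find() returns None when the tag is gone -- a cheap early warning
# that the page layout has changed under you
if results is None:
    raise SystemExit("no <tbody> found -- the page layout may have changed")

rows = results.find_all("tr")
print(f"found {len(rows)} rows")
```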


####################################################
for row in rows:
    cell = row.find_all("td")
    
    links = cell[0].find_all('a')
    countryGrab.append(links[1].text)

    compareToUSD.append(cell[1].text)
    compareToEUR.append(cell[2].text)
    compareToJPY.append(cell[3].text)
    compareToGBP.append(cell[4].text)
    compareToCHF.append(cell[5].text)
    compareToCAD.append(cell[6].text)
    compareToAUD.append(cell[7].text)
    compareToHKD.append(cell[8].text)
    
    x.add_row([countryGrab[num],compareToUSD[num],
    compareToEUR[num],compareToJPY[num],
    compareToGBP[num],compareToCHF[num],
    compareToCAD[num],compareToAUD[num],
    compareToHKD[num]])
    num = num +1

print(x)

-Inside our rows object we have a bunch of entries, lots of rows to search through.  Within each row we have cells ("<td></td>").  The data we want is within the cells
-As we cycle through each row, we search for all cells by the tag "td"
-Since I spent a bit of time looking through the source code of the site, I know which cells contain the data I want.  Here you can see that I am calling on particular cells and appending their text to the lists that were created earlier
-Once the lists are filled, we are ready to print the whole table!
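One small side note on the loop: the num counter works, but each pass already has that row's values in hand, so headers and cells can be paired up directly.  A sketch of that pattern with made-up values, building one dict per row (handy if the data is headed to another function rather than a printed table):

```python
field_names = ["Country", "USD", "EUR"]

# Hypothetical cell text, as the loop would see it row by row
scraped = [["Canada", "1.36", "1.25"], ["Japan", "151.2", "139.8"]]

# zip pairs each header with its cell, and dict() turns the pairs
# into a record -- no index counter required
records = [dict(zip(field_names, cells)) for cells in scraped]
print(records[0]["USD"])
```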

There you go: a Python script that grabs up-to-date exchange rates from around the world!
A copy of the script is in my "Projects" page.

Thanks!!
Andrew Campbell







