Why is there a table when I scrape with BeautifulSoup, but not pandas?

I'm trying to scrape the entries on this page into a tab-delimited format (mainly pulling out the sequence and the UniProt accession number).

When I run:

url = 'www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname='    
table = pd.read_html(url)
print(table)

I get:

Traceback (most recent call last):
  File "scrape_signalpeptides.py", line 7, in <module>
    table = pd.read_html(url)
  File "/Users/ION/anaconda3/lib/python3.7/site-packages/pandas/io/html.py", line 1094, in read_html
    displayed_only=displayed_only)
  File "/Users/ION/anaconda3/lib/python3.7/site-packages/pandas/io/html.py", line 916, in _parse
    raise_with_traceback(retained)
  File "/Users/ION/anaconda3/lib/python3.7/site-packages/pandas/compat/__init__.py", line 420, in raise_with_traceback
    raise exc.with_traceback(traceback)
ValueError: No tables found

So then I tried the Beautiful Soup method:

import requests
import pandas as pd
import json
from pandas.io.json import json_normalize
from bs4 import BeautifulSoup

url = 'http://www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname='
res = requests.get(url)
soup = BeautifulSoup(res.content, "lxml")
print(soup)

and I can see the data is there. Does anyone have an idea why I can't parse this page with pandas.read_html?

Edit 1: Based on a suggestion below, I ran this:

from bs4 import BeautifulSoup
import requests
s = requests.session()
s.headers['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'
res = s.get('https://www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=2&listname=')
print(res)

I changed the URL to each of www, http, and https, and for all of them I get connection errors, e.g.

urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x1114f0898>: Failed to establish a new connection: [Errno 61] Connection refused

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.signalpeptide.de', port=443): Max retries exceeded with url: /index.php?sess=&m=listspdb_bacteria&s=details&id=2&listname= (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x1114f0898>: Failed to establish a new connection: [Errno 61] Connection refused'

ConnectionRefusedError: [Errno 61] Connection refused

Try this:

from bs4 import BeautifulSoup as bs
import requests
import pandas as pd

url = 'http://www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname='
r = requests.get(url)
soup = bs(r.content, 'lxml')  # parse the response before searching for tables

tabs = soup.find_all('table')
my_tab = pd.read_html(str(tabs[0]))
my_tab[0].drop(my_tab[0].columns[1], axis=1).drop(my_tab[0].index[0])

This should output the main table on the page beginning with 'id 1000'.
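
If the goal is the tab-delimited extract described in the question (accession number and sequence), a minimal sketch building on my_tab above might look like this. The row labels are assumptions taken from the Entry Details table shown in the HTML dump further down the thread; adjust them if your page differs:

# Map the label column (0) to the value column (2) of the details table.
records = dict(zip(my_tab[0][0], my_tab[0][2]))

# The accession cell also contains created/updated dates; keep the first token.
accession_cell = str(records.get('UniProtKB/Swiss-Prot Accession Number', ''))
accession = accession_cell.split()[0] if accession_cell else ''

# Line breaks in the HTML may leave spaces inside the sequence; strip them.
sequence = str(records.get('Sequence', '')).replace(' ', '')

print(accession, sequence, sep='\t')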

The url variable is different between your scripts.

Side by side for comparison:

url = 'www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname=' # pandas
url = 'http://www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname=' # BeautifulSoup

I suspect that the http:// bit is important for pandas to recognize the argument as a URL as opposed to the HTML itself. After all, pandas.read_html interprets the argument dynamically, as described in the documentation:

A URL, a file-like object, or a raw string containing HTML. Note that lxml only accepts the http, ftp and file url protocols. If you have a URL that starts with 'https' you might try removing the 's'.

Specifically, the part "If you have a URL that starts with 'https' you might try removing the 's'" leads me to believe the http:// prefix is important for it to know the argument is a link as opposed to a "file-like object" or raw HTML.
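
A quick way to guard against this is to normalize the URL before handing it to pandas; a minimal sketch:

import pandas as pd

url = 'www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname='

# Prepend a scheme so lxml treats the string as a URL rather than raw HTML.
if not url.startswith(('http://', 'https://')):
    url = 'http://' + url

tables = pd.read_html(url)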

If the error is about exceeding max retries, you probably need to use a requests session with custom headers. Previous code I've written for this looked like:

import requests

s = requests.session()
s.headers['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'

res = s.get('your_url')

At that point you should be able to work with the res object the same way you would a normal requests.get() response (you can access attributes like .text and such). I'm not too sure how the s.headers part works; it's just something I copied from another SO post that fixed my script!
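
From there you can hand the downloaded HTML straight to pandas instead of letting read_html fetch the URL itself; a minimal sketch combining the two (wrapping res.text in StringIO so pandas treats it as a document rather than a URL):

from io import StringIO

import pandas as pd
import requests

s = requests.session()
s.headers['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'
res = s.get('http://www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname=')

# Parse the already-fetched markup, so the actual request goes through the
# session (and its headers) above rather than through lxml.
tables = pd.read_html(StringIO(res.text))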

Update

Part of the error message from your last code block is

ssl.CertificateError: hostname 'www.signalpeptide.de' doesn't match either of 'www.kg13.art', 'www.thpr.net'

This means their SSL certificate is not valid for that hostname, so https probably won't work because the host cannot be verified. I adjusted the code to use http and print the resulting HTML:

from bs4 import BeautifulSoup
import requests
s = requests.session()
s.headers['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'
res = s.get('http://www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=2&listname=')
print(res.text)

Results in:

C:\Users\rparkhurst\PycharmProjects\Workspace\venv\Scripts\python.exe C:/Users/rparkhurst/PycharmProjects/Workspace/new_workspace.py
<!doctype html>
<html class="no-js" lang="en">
<head>
 <meta charset="utf-8"/>
 <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
 <title>Signal Peptide Database</title>
 <link rel="stylesheet" href="css/foundation.css">
 <link href='http://cdnjs.cloudflare.com/ajax/libs/foundicons/3.0.0/foundation-icons.css' rel='stylesheet' type='text/css'>
 <link href="css/custom.css" rel="stylesheet" type="text/css">
</head>
<body>
<div class="top-bar">
 <div class="row">
  <div class="top-bar-left">
   <div class="top-bar-title">
    <span data-responsive-toggle="responsive-menu" data-hide-for="medium">
     <span class="menu-icon dark" data-toggle></span>
    </span>
    <a href="./"><img src="img/logo.jpg" alt="logo" id="logo"></a>
   </div>
  </div>
 <div class="top-bar-right">
  <h3 class="hide-for-small">Signal Peptide Website</h3>
  <div id="responsive-menu">
   <ul class="dropdown menu" data-dropdown-menu>
    <li><a href="./?m=myprotein">Search my Protein</a></li>
    <li><a href="./?m=searchspdb">Advanced Search</a></li>
    <li><a href="./?m=listspdb">Database Search</a></li>
    <li><a href="./?m=references">References</a></li>
    <li><a href="./?m=hints">Hints</a></li>
    <li><a href="./?m=links">Links</a></li>
    <li><a href="./?m=imprint">Imprint</a></li>
    </ul>
   </div>
  </div>
 </div>
</div>
<br>
<div class="row columns">
<div class="content">
<span class="headline">Signal Peptide Database - Bacteria</span><br><br>
<form action="index.php" method="post"><input type="hidden" name="sess" value="">
<input type="hidden" name="m" value="listspdb_bacteria">
<input type="hidden" name="id" value="2">
<input type="hidden" name="a" value="save">
<table cellspacing="2" cellpadding="2" border="0">
<tr>
<td colspan="3" class="tabhead">&nbsp;<b>Entry Details</b></td></tr>
<tr height="23">
<td class="highlight">ID</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">2</td>
</tr>
<tr height="23">
<td class="highlight">Source Database</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">UniProtKB/Swiss-Prot</td>
</tr>
<tr height="23">
<td class="highlight">UniProtKB/Swiss-Prot Accession Number</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">A6X5T5&nbsp;&nbsp;&nbsp;&nbsp;(Created: 2009-01-20 Updated: 2009-01-20)</td>
</tr>
<tr height="23">
<td class="highlight">UniProtKB/Swiss-Prot Entry Name</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight"><a target="_new" class="bblack" href="http://www.uniprot.org/uniprot/14KL_OCHA4">14KL_OCHA4</a></td>
</tr>
<tr height="23">
<td class="highlight">Protein Name</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">Lectin-like protein BA14k</td>
</tr>
<tr height="23">
<td class="highlight">Gene</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">Oant_3884</td>
</tr>
<tr height="23">
<td class="highlight">Organism Scientific</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">Ochrobactrum anthropi (strain ATCC 49188 / DSM 6882 / NCTC 12168)</td>
</tr>
<tr height="23">
<td class="highlight">Organism Common</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight"></td>
</tr>
<tr height="23">
<td class="highlight">Lineage</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">Bacteria<br>&nbsp;&nbsp;Proteobacteria<br>&nbsp;&nbsp;&nbsp;&nbsp;Alphaproteobacteria<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Rhizobiales<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Brucellaceae<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Ochrobactrum<br></td>
</tr>
<tr height="23">
<td class="highlight">Protein Length [aa]</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">151</td>
</tr>
<tr height="23">
<td class="highlight">Protein Mass [Da]</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">17666</td>
</tr>
<tr height="23">
<td class="highlight">Features</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight"><table><tr><td><b>Type</b></td><td><b>Description</b></td><td><b>Status</b></td><td><b>Start</b></td><td><b>End</b></td></tr><tr><td class="w"><font color="red">signal peptide</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="red"></font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="red">potential</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="red">1</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="red">26</font></td></tr><tr><td class="w"><font color="blue">chain</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="blue">Lectin-like protein BA14k</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="blue"></font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="blue">27</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="blue">151</font></td></tr><tr><td class="w"><font color="green">transmembrane region</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="green"></font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="green">potential</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="green">83</font>&nbsp;&nbsp;&nbsp;</td><td class="w"><font color="green">103</font></td></tr></table></td>
</tr>
<tr height="23">
<td class="highlight">SP Length</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight">26</td>
</tr>
<tr valign="top">
<td class="highlight"></td><td class="highlight" width="50">&nbsp;</td><td class="highlightfixed">----+----1----+----2----+----3----+----4----+----5</td></tr><tr valign="top">
<td class="highlight">Signal Peptide</td><td class="highlight" width="50">&nbsp;</td><td class="highlightfixed">MNIFKQTCVGAFAVIFGATSIAPTMA</td></tr><tr valign="top">
<td class="highlight">
Sequence</td><td class="highlight" width="50">&nbsp;</td><td class="highlightfixed"><font color="red">MNIFKQTCVGAFAVIFGATSIAPTMA</font><font color="blue">APLNLERPVINHNVEQVRDHRRPP<br>RHYNGHRPHRPGYWNGHRGYRHYRHGYRRYND</font><font color="green">GWWYPLAAFGAGAIIGGA<br>VSQ</font><font color="blue">PRPVYRAPRMSNAHVQWCYNRYKSYRSSDNTFQPYNGPRRQCYSPYS<br>R</td></tr><tr valign="top">
<td class="highlight">
Original</td><td class="highlight" width="50">&nbsp;</td><td class="highlightfixed">MNIFKQTCVGAFAVIFGATSIAPTMAAPLNLERPVINHNVEQVRDHRRPP<br>RHYNGHRPHRPGYWNGHRGYRHYRHGYRRYNDGWWYPLAAFGAGAIIGGA<br>VSQPRPVYRAPRMSNAHVQWCYNRYKSYRSSDNTFQPYNGPRRQCYSPYS<br>R</td></tr><tr valign="top">
<td class="highlight"></td><td class="highlight" width="50">&nbsp;</td><td class="highlightfixed">----+----1----+----2----+----3----+----4----+----5</td></tr><tr height="23">
<td class="highlight">Hydropathies</td>
<td class="highlight" width="50">&nbsp;</td>
<td class="highlight"><a href="./hydropathy/hydropathy.php?id=2" target="_new"><img src="./hydropathy/hydropathy.php?id=2" border="0" width="600"></a></td>
</tr>
<tr>
<td colspan="3" class="nohighlight">&nbsp;</td>
</tr>
<tr>
<td colspan="3" class="tabhead" align="center"><input class="button" type="reset" value="Back" onclick="history.back(-1);"></td>
</tr>
</table>
</form></div>
<hr>
<div class="row">
 <div class="small-4 medium-3 columns"><a href="./">Home</a>&nbsp;&nbsp;&nbsp;<a href="./?m=imprint">Imprint</a></div>
 <div class="small-8 medium-9 columns text-right">
 &copy; 2007-2017 <a href="mailto:kapp@mpi-cbg.de">Katja Kapp</a>, Dresden &amp; <a href="http://www.thpr.net/">thpr.net e. K.</a>, Dresden, Germany, last update 2010-06-11
 </div>
</div><br><br>
<script src="js/vendor/jquery.js"></script>
<script src="js/foundation.js"></script>
<script>
 $(document).foundation();
</script>
</body>
</html>


Process finished with exit code 0

So it seems this solves your issues.

If you can find the table in the soup but not when parsing with read_html, the reason could be that the specific table is hidden. In that case you can use the snippet below:

import bs4
import pandas

# open the file available at file_path
with open(file_path, encoding='utf-8') as fobj:
    soup = bs4.BeautifulSoup(fobj, 'html5lib')

# provide your table's class name
tables = soup.find_all('table', attrs={'class': 'class_name'})

for table in tables:
    # displayed_only=False also parses tables that are hidden
    data_frame = pandas.read_html(str(table), displayed_only=False)

Note: the displayed_only=False option in read_html allows you to parse hidden tables as well.
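
pandas can also do the attribute filtering itself, without the BeautifulSoup pass; a minimal sketch assuming the same file_path and placeholder class name as above (read_html accepts an attrs dict much like find_all does):

import pandas

# Select tables by their HTML attributes and include hidden ones.
frames = pandas.read_html(file_path, attrs={'class': 'class_name'},
                          displayed_only=False)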

Comments
  • I'm unable to reproduce this issue. pd.read_html('http://www.signalpeptide.de/index.php?sess=&m=listspdb_bacteria&s=details&id=1000&listname=') returns a list of 2 dataframes, of shapes [(42, 103), (20, 5)]
  • Thank you, sorry I was messing around with www, http, and https for both methods to see if it would make a difference, but the pandas method didn't work with any of these prefixes.
It was strange that others couldn't reproduce my error and that I couldn't get others' solutions to work. I was messing with it and got this: requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.signalpeptide.de', port=80): Max retries exceeded with url: /index.php?sess=&m=listspdb_bacteria&s=details&id=1&listname= (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x11e15ada0>: Failed to establish a new connection: [Errno 61] Connection refused')) I've been looking up solutions, installed pyopenssl, and added verify=False to one of my attempts.
So I guess the error is that I'm hitting the max number of retries for visiting a URL on the page?
  • @Kela If that is the error, I posted a way to fix it!
Thanks so much for your suggestion. I've edited my question to show the exact code I ran based on it, in case I've misunderstood, but when I try the code I posted as an edit with any of www, http, or https, I get connection refused. Thank you for your help.