Exporting Python Scraping Results to CSV

The code below yields a value for the "resultStats" ID, which I would like to save in a CSV file. Is there a smart way to put the "desired_google_queries" (i.e. the search terms) in column A and the "resultStats" values in column B of the CSV?

I saw that there are a number of threads on this topic, but none of the solutions I read through worked for this specific situation.

from bs4 import BeautifulSoup
import urllib.request
import csv

desired_google_queries = ['Elon Musk', 'Tesla', 'Microsoft']

for query in desired_google_queries:

    url = 'http://google.com/search?q=' + query

    req = urllib.request.Request(url, headers={'User-Agent': 'Magic Browser'})
    response = urllib.request.urlopen(req)
    html = response.read()

    soup = BeautifulSoup(html, 'html.parser')

    resultStats = soup.find(id="resultStats").string
    print(resultStats)

I took the liberty of rewriting this to use the Requests library instead of urllib, but this shows how to do the CSV writing, which I think is what you were more interested in:

from bs4 import BeautifulSoup
import requests
import csv

desired_google_queries = ['Elon Musk', 'Tesla', 'Microsoft']
result_stats = dict()

for query in desired_google_queries:
    # Let Requests encode the query string (it handles spaces in e.g. 'Elon Musk'),
    # and keep the User-Agent header from the original code so Google serves
    # the full results page.
    response = requests.get('http://google.com/search',
                            params={'q': query},
                            headers={'User-Agent': 'Magic Browser'})
    soup = BeautifulSoup(response.text, 'html.parser')
    result_stats[query] = soup.find(id="resultStats").string

with open('searchstats.csv', 'w', newline='') as fout:
    cw = csv.writer(fout)
    for q in desired_google_queries:
        cw.writerow([q, result_stats[q]])
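If you want a header row as well, `csv.DictWriter` is a convenient alternative to the plain writer used above. A minimal sketch, using hypothetical sample values in place of the scraped results:

```python
import csv

# Hypothetical sample data standing in for the scraped result_stats dict
result_stats = {'Elon Musk': 'About 100 results',
                'Tesla': 'About 200 results'}

with open('searchstats.csv', 'w', newline='') as fout:
    writer = csv.DictWriter(fout, fieldnames=['query', 'resultStats'])
    writer.writeheader()  # writes the header row: query,resultStats
    for query, stats in result_stats.items():
        writer.writerow({'query': query, 'resultStats': stats})
```

The `fieldnames` list fixes the column order, so column A is always the query and column B the result count.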


Instead of writing it line by line, you can write it all in one go by storing the results in a pandas DataFrame first. See the code below:

from bs4 import BeautifulSoup
import urllib.request
import urllib.parse
import pandas as pd

data_dict = {'desired_google_queries': [],
             'resultStats': []}

desired_google_queries = ['Elon Musk', 'Tesla', 'Microsoft']

for query in desired_google_queries:

    # URL-encode the query so spaces (e.g. in 'Elon Musk') are handled safely
    url = 'http://google.com/search?q=' + urllib.parse.quote_plus(query)

    req = urllib.request.Request(url, headers={'User-Agent': 'Magic Browser'})
    response = urllib.request.urlopen(req)
    html = response.read()

    soup = BeautifulSoup(html, 'html.parser')

    resultStats = soup.find(id="resultStats").string

    data_dict['desired_google_queries'].append(query)
    data_dict['resultStats'].append(resultStats)

df = pd.DataFrame(data=data_dict)
df.to_csv(path_or_buf='path/where/you/want/to/save/thisfile.csv', index=None)
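The DataFrame-then-CSV step on its own can be sketched like this, with hypothetical scraped values and an illustrative filename standing in for the live data:

```python
import pandas as pd

# Hypothetical values standing in for the scraped results
data_dict = {'desired_google_queries': ['Elon Musk', 'Tesla'],
             'resultStats': ['About 1 result', 'About 2 results']}

df = pd.DataFrame(data=data_dict)
# index=None drops pandas' row index so only the two data columns are written
df.to_csv('thisfile.csv', index=None)
```

The dict keys become the header row, so the search terms land in column A and the result counts in column B.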


The original answer has unfortunately been deleted. Please find the code below for everyone else interested in this situation; thanks to the user who originally posted the solution:

with open('eggs.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(['query', 'resultStats'])
    for query in desired_google_queries:
        ...
        spamwriter.writerow([query, resultStats])
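Note that `delimiter=' '` and `quotechar='|'` come straight from the csv module's documentation example and produce space-separated, pipe-quoted output rather than a conventional comma-separated file. A small self-contained sketch (with a made-up result value) shows the effect; with `QUOTE_MINIMAL`, any field containing the delimiter gets wrapped in the quote character:

```python
import csv

# Illustrative only: 'About 100 results' stands in for a real resultStats value
with open('eggs.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(['query', 'resultStats'])       # no spaces -> unquoted
    spamwriter.writerow(['Elon Musk', 'About 100 results'])  # spaces -> quoted

with open('eggs.csv') as f:
    print(f.read())
# query resultStats
# |Elon Musk| |About 100 results|
```

If the goal is a file that opens cleanly in Excel with the queries in column A, the default `csv.writer(csvfile)` (comma delimiter, double-quote quoting) is probably what you want instead.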
