Extract Data from JSON
I'm working on a Python question and need guidance to help me learn.
The details are in the attached file.
In this assignment you will write a Python program that prompts for a URL, reads the JSON data from that URL using urllib, parses the JSON, extracts the comment counts, and computes the sum of those numbers.
The closest sample code that shows how to parse JSON and extract a list is json2.py. You might also want to look at geoxml.py to see how to prompt for a URL and retrieve data from a URL.
You are provided two files for this assignment. One is a sample file (below) for which we give you the sum so you can test your program; the other is the actual data you need to process for the assignment (provided when you launch the quiz).
- Sample data: comments_42.js (Sum=2553)
You do not need to save these files to your folder, since your program will read the data directly from the URL. Note: each student has a distinct data URL for the assignment, so only use your own data URL for analysis.
Data Format
The data consists of a number of names and comment counts in JSON as follows:
{
  "note":"This file contains the sample data for testing",
  "comments":[
    {
      "name":"Romina",
      "count":97
    },
    {
      "name":"Laurie",
      "count":97
    },
    …
  ]
}
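Here is a minimal sketch of one way to put the pieces together with urllib and json, assuming the data matches the sample format above; the prompt text and variable names are illustrative, not required by the assignment.

import json
import urllib.request

url = input('Enter location: ')
print('Retrieving', url)

# Read the raw JSON text from the URL
data = urllib.request.urlopen(url).read().decode()
print('Retrieved', len(data), 'characters')

# Parse the JSON into a dictionary and pull out the list of comments
info = json.loads(data)
comments = info['comments']

# Each entry looks like {"name": ..., "count": ...}; sum the counts
total = sum(item['count'] for item in comments)
print('Count:', len(comments))
print('Sum:', total)

Run against the sample data URL, a program like this should report a sum of 2553; for the graded quiz, enter your own data URL instead.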