Just save this as karma.py and run it with Python 3.6 or higher (it uses f-strings, which need 3.6). You'll also need the requests library installed (pip install requests).
import requests

INSTANCE_URL = "https://feddit.de"
TARGET_USER = "ENTER_YOUR_USERNAME_HERE"
LIMIT_PER_PAGE = 50

totalPostScore = 0
totalCommentScore = 0
page = 1

# Fetch the first page of the user's posts and comments.
res = requests.get(f"{INSTANCE_URL}/api/v3/user?username={TARGET_USER}&limit={LIMIT_PER_PAGE}&page={page}").json()

# Keep paging until a page comes back empty.
while len(res["posts"]) + len(res["comments"]) > 0:
    totalPostScore += sum(x["counts"]["score"] for x in res["posts"])
    totalCommentScore += sum(x["counts"]["score"] for x in res["comments"])
    page += 1
    res = requests.get(f"{INSTANCE_URL}/api/v3/user?username={TARGET_USER}&limit={LIMIT_PER_PAGE}&page={page}").json()

print("Post karma: ", totalPostScore)
print("Comment karma: ", totalCommentScore)
print("Total karma: ", totalPostScore + totalCommentScore)
I'm getting back into Python for unrelated reasons, and the last time I was using it, JSON wasn't on my radar yet. I'm curious about the .json() method here, which seems to be exposing posts et al. for further manipulation without parsing. Is this really as simple as it appears?

Yes, it totally is that easy. At first I used an API wrapper library, but then I checked out the source and there is really no need for it, since requests already handles basically everything. .json() takes the response body of the request and runs it through json.loads(), and thus spits out a nice Python dict/list structure. It is absurdly simple and powerful.
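To make that concrete, here's a small sketch with no network involved. The payload below is invented, but shaped like the response the script above reads; calling json.loads() on the raw body is essentially what res.json() does for you:

import json

# Made-up response body in the shape the karma script expects.
body = '{"posts": [{"counts": {"score": 3}}], "comments": [{"counts": {"score": 7}}]}'

# requests' res.json() does roughly this under the hood:
res = json.loads(body)

# The result is a plain dict, so normal indexing works immediately.
print(res["posts"][0]["counts"]["score"])                   # 3
print(sum(c["counts"]["score"] for c in res["comments"]))   # 7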
I've not used requests, but yes, their docs make it look like it really is that easy: https://requests.readthedocs.io/en/latest/user/quickstart/ Looks like the .json() call just returns a dictionary (or maybe a list of dictionaries), which means you can use all of Python's normal dictionary methods to find the data you're looking for!

Thanks for the link! This looks like an absurdly powerful library for HTTP needs and output manipulation, from the perspective of a scraping neophyte.