This is a Python script that generates the GetsDrawn website. It takes data from /r/RedditGetsDrawn and makes something awesome: a mobile-friendly site built with the Nikola static site generator.
The script has been rewritten several times and has developed over time.
The first rgdsnatch script was written after I got banned from posting my artwork on /r/RedditGetsDrawn. The plan was to create a new site that displayed work from /r/RedditGetsDrawn.
Currently it only displays the 25 most recent posts on redditgetsdrawn. The script looks at the newest 25 reference photos on RedditGetsDrawn and handles only direct jpeg/png links, ignoring links that don't end in .jpg or .png. Instead of ignoring those links, it should follow them and fetch the image (or, in some cases, images) behind them. The photos are always submitted via imgur, so direct i.imgur links can still be filtered as before, while page links get passed through a Python imgur module that returns the underlying .jpeg or .png files.
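The page-link resolution described above could start with a URL heuristic like the following. This is a minimal sketch; `direct_image_url` is a hypothetical helper, and album/gallery pages contain several images, so they would still need the imgur API rather than a URL rewrite.

```python
import re

def direct_image_url(url):
    """Return a direct image URL for an imgur link, or None if the link
    can't be resolved without hitting the imgur API.

    Heuristic sketch: single-image imgur pages can usually be fetched as
    i.imgur.com/<id>.png; album/gallery pages (imgur.com/a/..., /gallery/...)
    are left for the API to handle.
    """
    # already a direct image link - keep it as-is
    if re.search(r'\.(jpe?g|png)$', url):
        return url
    # bare single-image page like https://imgur.com/abc123
    m = re.match(r'https?://(?:www\.)?imgur\.com/([A-Za-z0-9]+)$', url)
    if m:
        return 'https://i.imgur.com/{}.png'.format(m.group(1))
    return None
```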
This moves forward from rgdsnatch.py, which I am stuck on.
TODO
Fix the links that don't point directly to a png/jpeg but to a web page. The script needs to fetch the images at that address and embed them.
Display submitted artwork under the reference images.
Upload artwork for each user, and send them a message on redditgetsdrawn with links.
More pandas.
Saves reference images to imgs/year/month/day/reference/username-reference.png
Saves art images to imgs/year/month/day/art/username-line-bw-colour.png
Creates an index.html file with: the site title and logo (GetsDrawn), plus the last-updated date and time.
Path of image file: /imgs/year/month/day/username-reference.png. (This needs to be changed to just the username.)
Save the .meta data from reddit for each photo into the reference folder as username-yrmnthday.meta, containing info such as author, title, upvotes, and downvotes. Currently .meta files are saved to a meta folder alongside art and reference.
Folder sorting system of files: websitename/index.html, style.css, imgs/YEAR(15)-MONTH(2)-DAY(4)/art-reference-meta. Inside the art folder it currently generates USERNAME-line/bw/colour.png 50/50 white files. Maybe it should be getting art replies from reddit instead?
Inside the reference folder things work decently: it creates USERNAME-reference.png/jpeg files.
Currently username-line-bw-colour.png is saved to the imgs folder. Instead, save it to imgs/year/month/day/username.png. The script should check for the year/month/day folder and create it if it isn't there; if the folder already exists, move on. Maybe also save the reference image alongside the line/bw/colour pngs.
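The year/month/day folder check can be sketched with `os.makedirs`, whose `exist_ok` flag makes it a no-op when the folder already exists. `ensure_day_folder` is a hypothetical helper name:

```python
import os
import datetime

def ensure_day_folder(base):
    """Create imgs/<year>/<month>/<day> under `base` if it doesn't exist
    and return the path; safe to call repeatedly."""
    today = datetime.date.today()
    day_dir = os.path.join(base, 'imgs', str(today.year),
                           '{:02d}'.format(today.month),
                           '{:02d}'.format(today.day))
    # exist_ok=True means no error when the folder is already there
    os.makedirs(day_dir, exist_ok=True)
    return day_dir
```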
The script now filters for jpeg and png images and skips links to imgur pages. This needs fixing by fetching the images from those imgur pages. It renames the image files to the redditor's username followed by a -reference tag (ending with .png, of course). It opens these files with PIL and checks their sizes; images wider than 800px need to be resized down to 800px. These local images should then be linked in index.html instead of the imgur originals.
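The 800px resize rule could look like this. A sketch: `capped_size` and `cap_width` are hypothetical helpers, and `Image.LANCZOS` is Pillow's current name for the old `ANTIALIAS` filter:

```python
def capped_size(size, max_width=800):
    """Return the (width, height) an image should be resized to so its
    width never exceeds max_width, preserving aspect ratio."""
    w, h = size
    if w <= max_width:
        return (w, h)
    return (max_width, int(h * max_width / float(w)))

def cap_width(path, max_width=800):
    """Shrink the image at `path` in place if it is wider than max_width."""
    from PIL import Image  # imported here so the pure helper above has no deps
    img = Image.open(path)
    new_size = capped_size(img.size, max_width)
    if new_size != img.size:
        img.resize(new_size, Image.LANCZOS).save(path)
```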
Instead of linking the jpeg/png files on imgur, this script downloads them to the server.
Filter images as they are downloaded: skip an image if it was submitted less than a certain time ago, or if it has been submitted before.
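The has-it-been-submitted-before check could be as simple as looking for the post's file in the galleries folder. `already_downloaded` is a hypothetical helper; a freshness check could be layered on top via `os.path.getmtime`:

```python
import os

def already_downloaded(idpost, gallery_dir):
    """True if <idpost>.png has already been saved to the galleries folder,
    so the download (and translation calls) can be skipped."""
    return (idpost + '.png') in os.listdir(gallery_dir)
```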
Extend the subreddits it gets data from: cycle through a list of subreddits and run the script over each.
Browse certain days: the current day by default, with the option to scroll through other days.
Filters: male/female/animals/couples etc. A function that returns only male portraits. Tags to add to photos; filter images by tags.
In [53]:
import os
import requests
import re
import json
import time
from time import gmtime, strftime
#import nose
import arrow
import shutil
import subprocess
import getpass
from google.cloud import translate
In [54]:
import pygcloud
In [55]:
#url = 'https://www.reddit.com/r/redditgetsdrawn.json'
#headers = {
# 'User-Agent': 'hammersmake@gmail.com',
# This is another valid field
#}
#response = requests.get(url, headers=headers)
In [56]:
def getreference(subreddit):
    url = 'https://www.reddit.com/r/{}.json'.format(subreddit)
    headers = {'User-Agent': 'hammersmake@gmail.com'}
    return requests.get(url, headers=headers).json()
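Reddit's listing JSON nests each post under `data.children[i].data`; a small helper (hypothetical, shown here against a fabricated listing) makes the fields this script uses easier to grab:

```python
def extract_posts(listing):
    """Yield (id, author, url, title) tuples from a reddit listing JSON,
    such as the dict returned by getreference()."""
    for child in listing['data']['children']:
        d = child['data']
        yield d['id'], d['author'], d['url'], d['title']
```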
In [72]:
subjs = getreference('redditgetsdrawn')
In [58]:
def lenrefernce():
    # counts posts in the module-level subjs listing fetched above
    return len(subjs['data']['children'])
In [59]:
def listgallery(ldir):
    return os.listdir(ldir + '/galleries')
In [60]:
#listgallery('/mnt/c/Users/luke/Documents/zh-artctrl')
In [61]:
#totart = len(resjs['data']['children'])
In [62]:
#totart
In [63]:
#resjs['data']['children'][5]['data']['id']
In [64]:
allimg = os.listdir('/mnt/c/Users/luke/Documents/zh-artctrl/galleries')
In [65]:
os.chdir('/mnt/c/Users/luke/Documents/zh-artctrl/galleries')
In [66]:
#datet = arrow.now(resjs['data']['children'][tota]['data']['created'])
In [67]:
#from google.cloud import translate
# Instantiates a client
#translate_client = translate.Client()
# The text to translate
#text = u'Hello, world!'
# The target language
#target = 'zh_cn'
# Translates some text into Russian
#translation = translate_client.translate(
# text,
# target_language=target)
#print(u'Text: {}'.format(text))
#print(u'Translation: {}'.format(translation['translatedText']))
In [68]:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/wcmckee/test.json"
In [69]:
#hanjs = requests.get('https://glosbe.com/transliteration/api?from=Han&dest=Latin&text=我&format=json')
#pingzh = hanjs.json()
#print(pingzh['text'])
In [79]:
def downloadimgs(wotsubred, ldir):
    # use the freshly fetched listing rather than the stale module-level subjs
    resjs = getreference(wotsubred)
    os.chdir(ldir)
    for tota in range(0, len(resjs['data']['children'])):
        postdata = resjs['data']['children'][tota]['data']
        # only handle direct image links; imgur page links still need resolving
        if '.jpg' in postdata['url'] or '.png' in postdata['url']:
            idpost = postdata['id']
            authinfo = postdata['author']
            datet = arrow.get(postdata['created'])
            translate_client = translate.Client()
            transzh = translate_client.translate(postdata['title'], target_language='zh_cn')
            hanjs = requests.get('https://glosbe.com/transliteration/api?from=Han&dest=Latin&text={}&format=json'.format(transzh['translatedText']))
            pingzh = hanjs.json()
            subprocess.call('wget -O galleries/{}.png {}'.format(idpost, postdata['url']), shell=True)
            with open(ldir + '/posts/' + authinfo + '.meta', 'w') as aupos:
                aupos.write('.. title: {}\n.. slug: {}\n.. date: {}\n.. tags: tagsz\n.. link:\n.. description:\n.. type: text'.format(idpost, idpost, datet))
            with open(ldir + '/posts/' + authinfo + '.md', 'w') as aumeta:
                aumeta.write('# {}\n\n![{}]({})\n\n{}'.format(authinfo, idpost, '/galleries/' + idpost + '.png', postdata['title'] + '\n\n' + transzh['translatedText'] + '\n\n' + pingzh['text']))
In [80]:
downloadimgs('redditgetsdrawnbadly', '/mnt/c/Users/luke/Documents/zh-artctrl')
In [ ]:
In [45]:
#pygcloud.createbucket('getsdrawn')
In [46]:
#pygcloud.makebucketpublic('lionsheart')
In [47]:
def deploysite(blogdir, syncname):
    os.chdir(blogdir)
    subprocess.call('nikola build', shell=True)
    pygcloud.makebucketsync(syncname, blogdir + '/output')
In [78]:
deploysite('/mnt/c/Users/luke/Documents/zh-artctrl', 'getsdrawn')
In [49]:
def submitart(blogdir, idreference):
    os.chdir(blogdir)
    for root, dirs, files in os.walk(blogdir + '/galleries/', topdown=False):
        for name in files:
            if idreference in name:
                # return the first matching file so callers can open it
                return os.path.join(root, name)
In [50]:
#imagefile = submitart('/mnt/c/Users/luke/Documents/zh-artctrl', 'aco1f3')
In [51]:
from PIL import Image
In [27]:
import shutil
In [36]:
def createart(blogdir, idreference, saveloc):
    img = Image.open(blogdir + '/galleries/' + idreference + '.png')
    basewidth = 400
    wpercent = basewidth / float(img.size[0])
    hsize = int(float(img.size[1]) * wpercent)
    # LANCZOS is the current Pillow name for the old ANTIALIAS filter
    img = img.resize((basewidth, hsize), Image.LANCZOS)
    # paste the resized reference onto a blank white canvas
    newImage = Image.new('RGB', (4960, 3508), 'white')
    newImage.paste(img)
    newImage.save(saveloc + '/' + idreference + '-reference.png')
In [41]:
def batchcreateart(blogdir):
    idref = os.listdir(blogdir + '/galleries/')
    for idr in idref:
        createart(blogdir, idr.replace('.png', ''), '/home/wcmckee/imgtest/')
In [42]:
batchcreateart('/mnt/c/Users/luke/Documents/zh-artctrl')
In [31]:
def chopart(blogdir, idrefernce, saveloc):
    # removes the reference image from the finished artwork by cropping
    # 400 px off the left edge; crop() returns a new image, so assign it
    img = Image.open(blogdir + '/galleries/' + idrefernce + '-reference.png')
    img = img.crop((400, 0, img.size[0], img.size[1]))
    img.save(saveloc + '/' + idrefernce + '-final.png')
In [32]:
createart('/mnt/c/Users/luke/Documents/zh-artctrl', 'acwwcp', '/home/wcmckee/imgtest/')
In [ ]:
In [33]:
chopart('/mnt/c/Users/luke/Documents/zh-artctrl', 'acwwcp', '/home/wcmckee/imgtest/')
In [91]:
from IPython.display import Image as imgdisplay
In [82]:
#imgdisplay('/mnt/c/Users/luke/Documents/zh-artctrl/galleries/aco1f3.png')
In [ ]:
In [34]:
from PIL import Image
# Open the finished artwork
imageObject = Image.open('/mnt/c/Users/luke/Documents/zh-artctrl/galleries/art_control-acvivp.png')
# Crop 200 px off the left edge; the box is (left, upper, right, lower)
cropped = imageObject.crop((200, 0, imageObject.size[0], imageObject.size[1]))
# Display the cropped portion
cropped.show()
In [35]:
cropped.save('crop.png')
In [ ]:
imgdisplay(filename='crop.png')
In [89]:
def copytousb(blogdir, idreference, artistname, copyto):
    # use os.path.join so a copyto like '/mnt/f' gets its separator added
    shutil.copyfile(blogdir + '/galleries/' + artistname + '-' + idreference + '.png',
                    os.path.join(copyto, artistname + '-' + idreference + '.png'))
In [90]:
#copytousb('/mnt/c/Users/luke/Documents/zh-artctrl', 'acvivp', 'art_control', '/mnt/f')
In [38]:
imgdisplay(filename='/mnt/f/art_control-acvivp.png')
In [76]:
##createart('/mnt/c/Users/luke/Documents/zh-artctrl', 'acvivp', 'art_control')
In [77]:
#imgdisplay(filename='/mnt/c/Users/luke/Documents/zh-artctrl/galleries/art_control-acvivp.png')
Out[77]:
In [45]:
img = Image.open('/mnt/c/Users/luke/Documents/zh-artctrl/galleries/aco1f3.png')
In [59]:
basewidth = 200
In [46]:
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize), Image.LANCZOS)
img.save('test.png')
In [60]:
im = Image.new("RGB", (2000, 1000), "white")
In [61]:
im.size
Out[61]:
In [62]:
newImage.size
Out[62]:
In [69]:
newImage = Image.new('RGB', (800, 600), 'WHITE')
newImage.paste(img)
In [70]:
newImage.save('img.png')
In [71]:
#imgdisplay(filename='img.png')
Out[71]:
In [ ]:
In [47]:
#from IPython.display import Image as imgdisplay
#imgdisplay(filename='test.png')
Out[47]:
In [41]:
from PIL import Image
basewidth = 300
img = Image.open(imagefile)
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize), Image.LANCZOS)
img.save('test.png')
In [ ]:
In [65]:
#from codeq_nlp_api import CodeqClient
#client = CodeqClient(user_id="", user_key="")
#text = "We’d love to see your take on our little zoo family!"
#document = client.analyze(text)
#for sentence in document.sentences:
# print(sentence.sentiment)
In [67]:
##for sentence in document.sentences:
# print(sentence.emotions)
In [69]:
##for sentence in document.sentences:
# print(sentence.to_dict())
In [ ]: