hamentorwrite

Hamilton Mentor Write

Script to write a login entry for Hamilton Computer Mentors.

The config file contains default hours (start time, end time), name, and library. The script looks up the date, then asks for the amount of people helped, the type of help (tags - choose from popular choices), the device helped with (tags - choose from popular choices), and comments.

Could the config file just be a json object?
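It could. A minimal sketch, assuming a hypothetical mentorconfig.json next to the script with the defaults mentioned above:

{
    "name": "William",
    "library": "GP",
    "starttime": "13:00:00",
    "endtime": "15:00:00"
}

Loading it would then replace the hard-coded values below:

import json

with open('mentorconfig.json') as conf:
    defaults = json.load(conf)

startime = defaults['starttime']
endtime = defaults['endtime']
myname = defaults['name']
mylib = defaults['library']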

In [69]:
import arrow
import json
import os
In [70]:
arnow = arrow.now()
In [71]:
thedate = str(arnow.date())
In [72]:
str(arnow.time())
Out[72]:
'15:21:22.687083'
In [73]:
startime = ('13:00:00')
In [74]:
endtime = ('15:00:00')
In [75]:
myname = input('Name: ')
Name: William
In [76]:
mylib = ('GP')
In [77]:
typehelp = input('Type of help given: ')
Type of help given: helped with cv
In [78]:
devtype = input('Device: ')
Device: tablet
In [79]:
fincomments = input('Comments: ')
Comments: today i helped lots of people
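Note: defdict below uses pplhelped, but its input cell isn't shown in this notebook. A minimal sketch of that prompt (the wording is an assumption):

pplhelped = input('Amount of people helped: ')
# e.g. '3' - it is cast to int when defdict is built.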
In [80]:
anodict = dict()
In [81]:
defdict = {'name' : myname, 'startdatetime' : thedate + ' ' + startime, 'enddatetime' : thedate + ' ' + endtime, 
           'library' : 'GP', 'help count' : int(pplhelped), 'help' : typehelp, 'device' : devtype, 
           'comments': fincomments}
In [82]:
valddict = defdict.values()
In [83]:
valddict
Out[83]:
dict_values(['GP', 'tablet', 3, 'William', 'helped with cv', '2015-10-14 13:00:00', 'today i helped lots of people', '2015-10-14 15:00:00'])

What I want to do is take this dict and create columns out of the keys, with the values as rows under each column.
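A minimal sketch of that, assuming pandas is available: the dict becomes a one-row DataFrame, with the keys as columns.

import pandas

sessdf = pandas.DataFrame([defdict])
# Later sessions can be appended as extra rows with the same columns:
# sessdf = pandas.concat([sessdf, pandas.DataFrame([nextdict])], ignore_index=True)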

Could this perhaps work as an API - saving the latest entry as 0?

Create a JSON file that the JSON object is saved to. The script opens the JSON file, counts how many objects are already in it, and appends the new object with that count + 1 as its key.

For now it could just check hamcommen for the number of JSON objects and create a new one at count + 1.

Not happy with creating lots of .json files in the folder, but this will do for now.
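A sketch of the single-file alternative: one index.json holding every entry keyed by its number, appended to on each run (the path and key scheme here are assumptions).

import json
import os

storepath = '/home/wcmckee/hamcommen/index.json'

# Load the entries already saved, or start with an empty dict.
if os.path.isfile(storepath):
    with open(storepath) as stor:
        allent = json.load(stor)
else:
    allent = dict()

# The new entry gets the next number as its key.
allent[str(len(allent))] = defdict

with open(storepath, 'w') as stor:
    stor.write(json.dumps(allent))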

In [84]:
lishamcomen = os.listdir('/home/wcmckee/hamcommen/')
In [85]:
lecomen = len(lishamcomen)
In [86]:
newlecom = lecomen + 1
In [90]:
wthamcom = open('/home/wcmckee/hamcommen/index' + str(newlecom) + '.json', 'w')
In [91]:
wthamcom.write(json.dumps(defdict))
Out[91]:
222
In [93]:
wthamcom.close()

parseduxml

parseduxml

Script to parse the RSS XML feed from http://www.education.govt.nz/

Had problems parsing the HTML, so I decided to add a try with a pass on exception. It fixes the crash, but does this mean it's going to save less data?

Saving it as JSON. I'll make another script to build the Nikola site from the JSON object it creates.

The feed is missing an author. Why not just make MoE the author?

No post tags. It would be nice to tag things, e.g. schools, subject matter, areas etc.

In [ ]:
 
In [41]:
import requests
import xmltodict
import json
In [22]:
reqedu = requests.get('http://www.education.govt.nz/rss.xml')
In [23]:
diedu = xmltodict.parse(reqedu.text)
In [24]:
ledi = len(diedu['rss']['channel']['item'])
In [33]:
nodict = dict()

Remove all dashes in the title and change spaces to hyphens. Why is this only working on the last element?
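If only the last title looks changed, it is likely because repsi and nospac get overwritten on every pass, so only the final value is left after the loop. A minimal sketch of a helper applied to every item, assuming diedu is the parsed feed from above:

def slugtitle(title):
    # Drop en dashes, then turn spaces into hyphens.
    return title.replace('–', '').replace(' ', '-')

for item in diedu['rss']['channel']['item']:
    print(slugtitle(item['title']))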

In [26]:
nospap= list()
In [27]:
siteloc = '/home/wcmckee/minrss/posts/'
In [39]:
for isz in range(0, ledi):
    try:
        postdict = dict()
        #diet = (diedu['rss']['channel']['item'][isz]['title'])
        repsi = (diedu['rss']['channel']['item'][isz]['title'].replace('–', ''))
        
        nospac = repsi.replace(' ', '-')
        postdict.update({'title' : repsi})
        
        #oprst = open(siteloc + diet + '.rst', 'w')
        #oprst.write(diet)
        deitz = (diedu['rss']['channel']['item'][isz]['description'])
        postdict.update({'description' : deitz})
        pubd = (diedu['rss']['channel']['item'][isz]['pubDate'])
        links = (diedu['rss']['channel']['item'][isz]['link'])
        postdict.update({'publish' : pubd, 'link' : links})
    
        
        pocopy = postdict.copy()
        
        nodict.update({isz : pocopy})
        #opmets = open(siteloc + deitz + '.meta', 'w')
        #opmets.write()
        #print(deitz)
        #oprst.close()
        #opmets

    except Exception:
        pass
In [46]:
jsdic = json.dumps(nodict)
In [48]:
wrimd = open('/home/wcmckee/minrss/moenews.json', 'w')
In [49]:
wrimd.write(jsdic)
Out[49]:
95284
In [50]:
wrimd.close()

ccslis

ccslis

Create daily notices for all schools in /home/wcmckee/ccschol.

Report back on all signins.

Live roll data.

Students and Staff Payroll on signinlca/usernames.

Create blog post on Nikola blog with student profile. Post profile contains student details and link to their profile blog.

Turn the py file into an ipynb. Separate cells in the py file with template code.

First it creates blogs for every school, with a post for each student on the school blog. Then it creates a blog for each student, with a blog post for every day they were enrolled at a school.

In [9]:
import os
import pandas
import arrow

Walk the folder for all .ipynb files not in the wcm.com posts folder.

Two lists: 0 - .ipynb files not in wcm.com; 1 - .ipynb files in wcm.com.
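A minimal sketch of that walk, assuming /home/wcmckee/github/ is the root to search; the wcm.com check is just a substring match on the path.

import os

notwcm = list()   # 0 - .ipynb files not in wcm.com
inwcm = list()    # 1 - .ipynb files in wcm.com

for root, dirs, files in os.walk('/home/wcmckee/github/'):
    for fil in files:
        if fil.endswith('.ipynb'):
            fullpath = os.path.join(root, fil)
            if '/wcm.com/' in fullpath:
                inwcm.append(fullpath)
            else:
                notwcm.append(fullpath)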

In [10]:
arti = arrow.now()
In [11]:
arti
Out[11]:
<Arrow [2015-09-30T08:55:59.179386+13:00]>
In [23]:
opcomp = os.listdir('/home/wcmckee/github/')
In [ ]:
 
In [ ]:
 
In [24]:
opcomp
Out[24]:
['pydnz',
 'getsdrawndotcom',
 'signinlca',
 'lcacoffee',
 'wcmckee.com',
 'wcmckee-notebook',
 'pianobar',
 'wcmbbdotcom',
 'nikola-site',
 'jupyterhub_cookie_secret',
 'superblog',
 'pyshorteners',
 'wcm',
 'garrison-wow-track',
 'niketa',
 'icalendar',
 'devstack',
 'nikola',
 'config.txt',
 'pytm',
 'chriswarrick.com',
 'wcm.com',
 'control-pianobar',
 'ece-display',
 'cv',
 'hamiltoncomputerclub.org.nz',
 'jupyterhub.sqlite',
 'cs4hs',
 'wcmckee',
 'brobeurdotcom',
 'bacongamejam05',
 'pelican-themes',
 'gwt']
In [21]:
alccs = os.listdir('/home/wcmckee/ccschol/')
In [15]:
pandas.TimeSeries(arti)
Out[15]:
0    2015-09-30T08:55:59.179386+13:00
dtype: object
In [22]:
for alc in alccs:
    print(alc)
te-wharekura-o-maniapoto
ardmore
shotover-school
hobsonville-point-secondary-school
south-hornby
thorndon
leigh-school
st-josephs-upper-hutt
green-bay
oaklands-
hampden-street
rangiora-borough
pegasus-bay
te-pa-o-rakaihautu
paparoa-street
te-one
kaikohe-west
thorrington
grey-lynn
ohoka
westlake-girls-high-school
merrin
stonefields-school
sylvia-park-school
roydvale
marlborough-primary
ward
raphael-house
whau-valley
taipa-area-school-
pukekohe-intermediate
ashburton-intermediate
onewhero-area
rawhiti
horowhenua-college
our-lady-star-of-the-sea-sumner
taradale-intermediate
eastern-hutt
st-marys-catholic-tauranga
windwhistle
northcote-college
diamond-harbour
barton-rural
orewa-college
ebbett-park
shirley-intermediate
sumner
arrowtown
taupaki-school
yaldhurst-model
nelson-park
wellington-high-school
redwood-tawa
somerfield
marewa
albany-senior-high-school
middleton-grange-school
kaingaroa-chatham-islands
cheviot-area
okaihau-college
runanga
hutt-valley-high-school
our-lady-of-victoriea
burnside-high-school
hillpark
tkkm-o-te-atihaunui-a-paparangi
tawa-intermediate
auckland-girls-grammar-school
broadfield
westmere
kaikoura-suburban
whangaparaoa-college
our-lady-of-snows-methven
waikato-diocesan
warrington-school
banks-avenue
cobham-intermediate
st-patrick’s-bryndwr
hobsonville-point-primary-school
te-hihi
st-patrick’s-kaiapoi
springston-school
nayland-college
elmwood-normal
pakuranga-heights-school
sacred-heart-catholic
eskdale
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [19]:
pver = pandas.version
In [20]:
str(pver)
Out[20]:
"<module 'pandas.version' from '/usr/lib/python3/dist-packages/pandas/version.py'>"
In [32]:
str(arti.date())
Out[32]:
'2015-09-28'
In [33]:
str(arti.time())
Out[33]:
'05:22:29.894485'
In [37]:
for alc in alccs:
    print(alc)
    oprsf = open('/home/wcmckee/ccschol/' + alc + '/posts/test.rst', 'w')
    oprsf.write('Hello ' + alc)
    oprsf.close()
    
    opmet = open('/home/wcmckee/ccschol/' + alc + '/posts/test.meta', 'w')
    
    opmet.write(alc + '\n' + alc + '\n' + str(arti.date()) + ' ' + str(arti.time()))
    opmet.close()
te-wharekura-o-maniapoto
ardmore
shotover-school
hobsonville-point-secondary-school
south-hornby
thorndon
leigh-school
st-josephs-upper-hutt
green-bay
oaklands-
hampden-street
rangiora-borough
pegasus-bay
te-pa-o-rakaihautu
paparoa-street
te-one
kaikohe-west
thorrington
grey-lynn
ohoka
westlake-girls-high-school
merrin
stonefields-school
sylvia-park-school
roydvale
marlborough-primary
ward
raphael-house
whau-valley
taipa-area-school-
pukekohe-intermediate
ashburton-intermediate
onewhero-area
rawhiti
horowhenua-college
our-lady-star-of-the-sea-sumner
taradale-intermediate
eastern-hutt
st-marys-catholic-tauranga
windwhistle
northcote-college
diamond-harbour
barton-rural
orewa-college
ebbett-park
shirley-intermediate
sumner
arrowtown
taupaki-school
yaldhurst-model
nelson-park
wellington-high-school
redwood-tawa
somerfield
marewa
albany-senior-high-school
middleton-grange-school
kaingaroa-chatham-islands
cheviot-area
okaihau-college
runanga
hutt-valley-high-school
our-lady-of-victoriea
burnside-high-school
hillpark
tkkm-o-te-atihaunui-a-paparangi
tawa-intermediate
auckland-girls-grammar-school
broadfield
westmere
kaikoura-suburban
whangaparaoa-college
our-lady-of-snows-methven
waikato-diocesan
warrington-school
banks-avenue
cobham-intermediate
st-patrick’s-bryndwr
hobsonville-point-primary-school
te-hihi
st-patrick’s-kaiapoi
springston-school
nayland-college
elmwood-normal
pakuranga-heights-school
sacred-heart-catholic
eskdale
In [38]:
for alc in alccs:
    print(alc)
    os.chdir('/home/wcmckee/ccschol/' + alc)
    os.system('nikola build')
    
te-wharekura-o-maniapoto
ardmore
shotover-school
hobsonville-point-secondary-school
south-hornby
thorndon
leigh-school
st-josephs-upper-hutt
green-bay
oaklands-
hampden-street
rangiora-borough
pegasus-bay
te-pa-o-rakaihautu
paparoa-street
te-one
kaikohe-west
thorrington
grey-lynn
ohoka
westlake-girls-high-school
merrin
stonefields-school
sylvia-park-school
roydvale
marlborough-primary
ward
raphael-house
whau-valley
taipa-area-school-
pukekohe-intermediate
ashburton-intermediate
onewhero-area
rawhiti
horowhenua-college
our-lady-star-of-the-sea-sumner
taradale-intermediate
eastern-hutt
st-marys-catholic-tauranga
windwhistle
northcote-college
diamond-harbour
barton-rural
orewa-college
ebbett-park
shirley-intermediate
sumner
arrowtown
taupaki-school
yaldhurst-model
nelson-park
wellington-high-school
redwood-tawa
somerfield
marewa
albany-senior-high-school
middleton-grange-school
kaingaroa-chatham-islands
cheviot-area
okaihau-college
runanga
hutt-valley-high-school
our-lady-of-victoriea
burnside-high-school
hillpark
tkkm-o-te-atihaunui-a-paparangi
tawa-intermediate
auckland-girls-grammar-school
broadfield
westmere
kaikoura-suburban
whangaparaoa-college
our-lady-of-snows-methven
waikato-diocesan
warrington-school
banks-avenue
cobham-intermediate
st-patrick’s-bryndwr
hobsonville-point-primary-school
te-hihi
st-patrick’s-kaiapoi
springston-school
nayland-college
elmwood-normal
pakuranga-heights-school
sacred-heart-catholic
eskdale
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 

niktrans

NikTrans

Python script to create Nikola sites from a list of schools. Edits conf.py file for site name and licence.

In [5]:
import os
import json
In [ ]:
os.system('python3 nikoladu.py')
os.chdir('/home/wcmckee/nik1/')
os.system('nikola build')
os.system('rsync -azP /home/wcmckee/nik1/* wcmckee@wcmckee.com:/home/wcmckee/github/wcmckee.com/output/minedujobs')
In [6]:
opccschho = open('/home/wcmckee/ccschool/cctru.json', 'r')
In [7]:
opcz = opccschho.read()
In [8]:
rssch = json.loads(opcz)
In [9]:
filrma = ('/home/wcmckee/ccschol/')
In [10]:
for rs in rssch.keys():
    hythsc = (rs.replace(' ', '-'))
    hylow = hythsc.lower()
    hybrac = hylow.replace('(', '')
    hybaec = hybrac.replace(')', '')
    os.mkdir(filrma + hybaec)
    
    os.system('nikola init -q ' + filrma + hybaec)
    

I want to open each of the conf.py files and replace the name of the site with hythsc.lower().

The dir /home/wcmckee/ccschol has all the school folders. Need to replace the Demo Site name in conf.py with the folder name of the school.

Some school names are missing characters - eg ardmore.
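A sketch of one slugify helper that could be shared by the folder creation and the conf.py edits so the same characters are always handled; the apostrophe and trailing-hyphen handling is an assumption about what is going missing.

def schoolslug(name):
    # Lowercase, spaces to hyphens, drop brackets and apostrophes, trim stray hyphens.
    slug = name.lower().replace(' ', '-')
    for ch in ('(', ')', "'", '’'):
        slug = slug.replace(ch, '')
    return slug.strip('-')

# e.g. schoolslug("St Patrick's (Bryndwr)") -> 'st-patricks-bryndwr'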

In [11]:
lisschol = os.listdir('/home/wcmckee/ccschol/')
In [12]:
findwat = ('LICENSE = """')
In [13]:
def replacetext(findtext, newtext):
    # Rewrite conf.py in every school folder, swapping findtext for the quoted newtext.
    for lisol in lisschol:
        filereaz = ('/home/wcmckee/ccschol/' + lisol + '/conf.py')
        f = open(filereaz,'r')
        filedata = f.read()
        f.close()

        newdata = filedata.replace(findtext, '"' + newtext + '"')
        #print (newdata)
        f = open(filereaz,'w')
        f.write(newdata)
        f.close()
In [14]:
replacetext('LICENSE = """', 'LICENSE = """<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons Attribution 4.0 International License" style="border-width:0; margin-bottom:12px;" src="https://i.creativecommons.org/l/by/4.0/88x31.png"></a>"')
In [15]:
licfil = 'LICENSE = """<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons Attribution 4.0 International License" style="border-width:0; margin-bottom:12px;" src="https://i.creativecommons.org/l/by/4.0/88x31.png"></a>"'
In [16]:
opwcm = ('/home/wcmckee/github/wcm.com/conf.py')
In [17]:
for lisol in lisschol:
    print (lisol)
    rdwcm = open(opwcm, 'r')
    
    filewcm = rdwcm.read()
    newdata = filewcm.replace('wcmckee', lisol)

    rdwcm.close()
    #print (newdata)
    
    f = open('/home/wcmckee/ccschol/' + lisol + '/conf.py','w')
    f.write(newdata)
    f.close()
te-wharekura-o-maniapoto
ardmore
shotover-school
hobsonville-point-secondary-school
south-hornby
thorndon
leigh-school
st-josephs-upper-hutt
green-bay
oaklands-
hampden-street
rangiora-borough
pegasus-bay
te-pa-o-rakaihautu
paparoa-street
te-one
kaikohe-west
thorrington
grey-lynn
ohoka
westlake-girls-high-school
merrin
stonefields-school
sylvia-park-school
roydvale
marlborough-primary
ward
raphael-house
whau-valley
taipa-area-school-
pukekohe-intermediate
ashburton-intermediate
onewhero-area
rawhiti
horowhenua-college
our-lady-star-of-the-sea-sumner
taradale-intermediate
eastern-hutt
st-marys-catholic-tauranga
windwhistle
northcote-college
diamond-harbour
barton-rural
orewa-college
ebbett-park
shirley-intermediate
sumner
arrowtown
taupaki-school
yaldhurst-model
nelson-park
wellington-high-school
redwood-tawa
somerfield
marewa
albany-senior-high-school
middleton-grange-school
kaingaroa-chatham-islands
cheviot-area
okaihau-college
runanga
hutt-valley-high-school
our-lady-of-victoriea
burnside-high-school
hillpark
tkkm-o-te-atihaunui-a-paparangi
tawa-intermediate
auckland-girls-grammar-school
broadfield
westmere
kaikoura-suburban
whangaparaoa-college
our-lady-of-snows-methven
waikato-diocesan
warrington-school
banks-avenue
cobham-intermediate
st-patrick’s-bryndwr
hobsonville-point-primary-school
te-hihi
st-patrick’s-kaiapoi
springston-school
nayland-college
elmwood-normal
pakuranga-heights-school
sacred-heart-catholic
eskdale
In [18]:
for rdlin in rdwcm.readlines():
    #print (rdlin)
    if 'BLOG_TITLE' in rdlin:
        print (rdlin)
        
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-18-46372e43ced8> in <module>()
----> 1 for rdlin in rdwcm.readlines():
      2     #print (rdlin)
      3     if 'BLOG_TITLE' in rdlin:
      4         print (rdlin)
      5 

ValueError: I/O operation on closed file.
In [ ]:
for lisol in lisschol:
    print (lisol)
    hythsc = (lisol.replace(' ', '-'))
    hylow = hythsc.lower()
    hybrac = hylow.replace('(', '')
    hybaec = hybrac.replace(')', '')
    filereaz = ('/home/wcmckee/ccschol/' + hybaec + '/conf.py')
    f = open(filereaz,'r')
    filedata = f.read()
    f.close()

    newdata = filedata.replace('LICENCE = """', licfil )
    #print (newdata)
    f = open(filereaz,'w')
    f.write(newdata)
    f.close()
In [ ]:
for lisol in lisschol:
    print (lisol)
    hythsc = (lisol.replace(' ', '-'))
    hylow = hythsc.lower()
    hybrac = hylow.replace('(', '')
    hybaec = hybrac.replace(')', '')
    filereaz = ('/home/wcmckee/ccschol/' + hybaec + '/conf.py')
    f = open(filereaz,'r')
    filedata = f.read()
    f.close()

    newdata = filedata.replace('"Demo Site"', '"' + hybaec + '"')
    #print (newdata)
    f = open(filereaz,'w')
    f.write(newdata)
    f.close()

Perform Nikola build of all the sites in ccschol folder

In [ ]:
buildnik = input('Build school sites y/N ')
In [ ]:
for lisol in lisschol:
    print (lisol)
    os.chdir('/home/wcmckee/ccschol/' + lisol)
    if 'y' in buildnik:
        os.system('nikola build')
In [ ]:
makerst = open('/home/wcmckee/ccs')
In [ ]:
for rs in rssch.keys():
    hythsc = (rs.replace(' ', '-'))
    hylow = hythsc.lower()
    hybrac = hylow.replace('(', '-')
    hybaec = hybrac.replace(')', '')
    
    #print (hylow())
    filereaz = ('/home/wcmckee/ccschol/' + hybaec + '/conf.py')
    f = open(filereaz,'r')
    filedata = f.read()
    

    newdata = filedata.replace("Demo Site", hybaec)
    f.close()
    f = open(filereaz,'w')
    f.write(newdata)
    f.close()

hamcompmentor

Hamilton Computer Mentor

Script to parse xls file of Hamilton Computer Mentor Data

I want to skiprows on the ExcelFile when I open it, like I do with read_excel. The other option is to get the excel file sheets saved off separately.

Script to write the excel file. It knows the date and time (Fridays 1-3 for me). It knows my name (William). It asks for input on the help I provided and the amount of people assisted.

Clean up the current data: reduce the dates down to a set, join the times and dates, and recreate the data entry.

Make it a json object.

Update it for a login: name, date/time.
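A minimal sketch of that cleanup, assuming the Date and Time columns parse cleanly (rows with blanks would need a dropna first):

import pandas

menfor = pandas.read_excel('/home/wcmckee/Desktop/mentorform.xlsx', skiprows=5)

# Join the Date and Time columns into one datetime column, then dump the rows as JSON.
menfor['datetime'] = pandas.to_datetime(
    menfor['Date'].astype(str) + ' ' + menfor['Time'].astype(str))
menjson = menfor.to_json(orient='records')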

In [ ]:
 
In [ ]:
import pandas
import numpy as np
In [ ]:
menfor = pandas.read_excel('/home/wcmckee/Desktop/mentorform.xlsx', skiprows=5)
In [ ]:
shenam = pandas.ExcelFile('/home/wcmckee/Desktop/mentorform.xlsx')
In [ ]:
menlis = list()
In [ ]:
blahmen =menfor.Mentor.get_values()
In [ ]:
lisitem = list()
In [ ]:
setitem = set(lisitem)
In [ ]:
setitem
In [ ]:
lisppl = list()
In [ ]:
for blst in set(blahmen):
    print (blst)
    lisppl.append(blst)
In [ ]:
for lisp in lisppl[1:]:
    print(lisp.lower())
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
tidaf = pandas.DataFrame(pandas.to_datetime(menfor.Time))
In [ ]:
tidaf
In [ ]:
datpan = pandas.DataFrame(pandas.to_datetime(menfor.Date))
In [ ]:
retpan = pandas.DataFrame(tidaf + datpan)
In [ ]:
tidap = tidaf.join(datpan)
In [ ]:
tidap.count()
In [ ]:
tidap.columns
In [ ]:
(tidap.index)
In [ ]:
tidate = tidap.Date
In [ ]:
tival = set(tidate.values)
In [ ]:
len(tival)
In [ ]:
for tiv in tival:
    print(tiv)
In [ ]:
for ti in tival:
    print(ti)
    pantimes = pandas.DataFrame(ti)
In [ ]:
datpan
In [ ]:
for men in menfor:
    print(men)
In [ ]:
for mef in menfor.Comments:
    print(mef)
In [ ]:
for bla in blahmen:
    print(bla)
    lisitem.append(bla)
In [ ]:
menlis.append(menfor.Mentor)
In [ ]:
menlis
In [ ]:
shn = shenam.sheet_names
In [ ]:
shelist = list()
In [ ]:
shn
In [ ]:
for sh in shn:
    #print(sh)
    shelist.append((shenam.parse(sh)))
In [ ]:
hamlend = len(shelist)
In [ ]:
hamlend
In [ ]:
lisshez = list()
In [ ]:
li
In [ ]:
for lish in range(hamlend):
    print(lisshez[lish])
In [ ]:
 
In [ ]:
for lisz in lisshez:
    print(lisz.to_html)
In [ ]:
for shl in range(hamlend):
    print(shelist[shl])
    lisshez.append((shelist[shl]))
In [ ]:
 
In [ ]:
lisshez
In [ ]:
lenlis = len(lisshez)
In [ ]:
lisho = lisshez[3]
In [ ]:
lide = lisho[4:]
In [ ]:
lide
In [ ]:
for lel in range(lenlis):
    print (lisshez[lel])
In [ ]:
 
In [ ]:
 
In [ ]:
liszer = lisshez[0]
In [ ]:
for lisk in liszer:
    print(lisk)
In [ ]:
mentim = menfor.Time
In [ ]:
menvals = mentim.values
In [ ]:
lismev = list()
In [ ]:
 
In [ ]:
for mev in menvals:
    print(mev)
    lismev.append(mev)
In [ ]:
lisde = set(lismev)
In [ ]:
lisfix = list()
In [ ]:
# Drop the NaN entries from the set of times.
a = [lis for lis in lisde if not pandas.isnull(lis)]
In [ ]:
a
In [ ]:
 
In [ ]:
for ilre in lisde:
    print (ilre)
    #print(type(ilre))
    #print (ilre[:])
    #lisfix.append(ilre)
    #il = (ilre.replace(' ', ''))
    #print(il)
In [ ]:
for lis in lisfix:
    print(str(lis.replace(' ', '')))
In [ ]:
allibs = menfor.Library.values
In [ ]:
albs = set(allibs)
In [ ]:
albs

resume

William C Mckee Email: will (at) artcontrol.me Phone: 0223721475

Art, Code, Photography, Video, Teacher, Public Speaker.

Education

Diploma with Honors in Art and Creativity. The Learning Connexion. July 2010 - December 2012.

Studied Drawing and Design. Palmerston North School Of Design. October 2008 - July 2010.

Studied Applied Visual Imaging. Palmerston North UCOL. 2008 - 2009.

NCEA Level 2. Horowhenua College, Levin. 2002 - 2006

Work

Education Support Worker, Ministry of Education. Working at Whaihanga Early Learning Centre. October 2014 - December 2014.

Video Game Development tutor. Chalkle. September 2013.

Casual Life Modelling. The Learning Connexion. 2011 - 2012.

Mail Delivery. Kiwi Mail. 2005.

Volunteer

KiwiPyCon2015. September 2015. AV team - recording talks.

KiwiJam2015. July 2015. Event Photographer.

linux.conf.au. January 10-16 2015. AV team - recording talks. http://linux.conf.au/

Te Whare O Te Ata (Fairfield Community Centre). February 2014 - September 2014. Working with children on IT and art. Learning Linux sysadmin. http://fairfield.org.nz/

Python Weekly Classes. 2014. Te Whare O Te Ata. Wednesday night programming and computer help.

Whaihanga Early Learning Centre. 2014. Volunteer helper. Art, building with blocks and morning walks.

SeniorNet, Levin. 2013. Helped elderly with their digital devices.

Company Branding Shop. 2010. Screen printing. http://shirt.co.nz

Speaker

Hamilton Python User Group. September 2015. Talk on KiwiPyCon2015.

KiwiPyCon2015. September 2015. Lightning talk on GetsDrawn.

Hamilton Python User Group. August 2015. Talk on Dominate library.

linux.conf.au. Jan 2015. Lightning Talk at astro miniconf regarding IPython Notebook. http://bit.ly/15zGtNC

Hamilton Computer Club. 2013 - 2014. Member and speaker Feb 2014 on Python programming language.

Hamilton Linux Users Group. 2014. Speaker on GoDot Game Engine.

Kiwi PyCon 2013. Lightning Talk regarding lastfm api. http://bit.ly/18dgBJb

Projects

ArtControl.me: The Art Of William Mckee. 2010 - 2015. Blog for uploading and discussing artwork. Pencil Drawing, Digital Painting. Life drawing, portraits, street scenes, landscapes. http://artcontrol.me CC:BY licence.

BroBeur Studios: Video Game Development. 2012 - 2013. 13 games on the Google Play Store. The majority were solo, created in between 48 hours and 7 days. Collaborated with others in small 2-4 person teams. Involved in Global Game Jam 2013, 2014, 2015. http://brobeur.com

FreshFigure Photography. 2012 - 2013. Stock Photography site to host photography. Street and landscapes. Photography used by Horowhenua Mail, March 2013. CC:BY licence.

WCMCKEE: Web and Software Development. 2011 - Ongoing. Python programming. Web scraping, data processing, website generation, point of sale, cyber cafe management system. Sites generated include http://getsdrawn.com and LCA Signin (digital sign-in/out system). http://wcmckee.com MIT licence. Github: https://github.com/wcmckee/

In [ ]:
 

nikoladu

Nikoladu

Script to take json object of ministry of education jobs and create Nikola blog posts - rst and meta files.

In [1]:
#import nikola

import requests
import json
import pandas
In [ ]:
 
In [2]:
opedu = open('/home/wcmckee/github/wcmckee.com/output/minedujobs/index.json', 'r')
In [3]:
minjob = opedu.read()
In [4]:
dicminj = json.loads(minjob)
In [5]:
ldic = len(dicminj)
In [ ]:
 
In [7]:
catlis = list()

loclis = list()
datlis = list()
jobti = list()
In [ ]:
 
In [8]:
numdic = dict()
In [ ]:
 
In [9]:
for ldi in range(ldic):
    dicjob = dict()
    catedi = (dicminj[str(ldi)]['Category'])
    locdi = (dicminj[str(ldi)]['Location'])
    datdi = (dicminj[str(ldi)]['Date Advertised'])
    pandatz = pandas.to_datetime(datdi)
    pdate = pandatz.date()
    titdi = (dicminj[str(ldi)]['Job Title'])
    
    jobref = (dicminj[str(ldi)]['Job Reference'])
    jorefd = jobref[4:]
    #print (jorefd)
    
    skildi = (dicminj[str(ldi)]['lidocend'])
    
    #for ski in skildi:
        #print (ski)
        #for sk in ski:
            #print (sk)
    #print (titdi + '\n' + skildi)
            
            
    
    opmetf = open('/home/wcmckee/minstryofedu/posts/' + jorefd + '.meta', 'w')
    opmetf.write(jorefd + '\n' + jorefd + '\n' + str(pdate) + ' ' + str('09:00:00') + '\n' + catedi + ', ' + locdi)
    opmetf.close()
    
    oprstfi = open('/home/wcmckee/minstryofedu/posts/' + jorefd + '.rst', 'w')
    oprstfi.write(titdi)
    for ski in skildi:
        #print (ski)
        #for sk in ski:
        #    print (sk)
        oprstfi.write(str(ski))
    
    oprstfi.close()



    dicjob.update({'Category' : catedi, 'Date Advertised' : str(pdate), 'Job Title' : titdi,
    'Location' : locdi, 'Job Reference' : jobref})
    
    numdic.update({ldi : dicjob})
    #numdic.update({ldi : dicjob})
    
    loclis.append(locdi)
    datlis.append(datdi)
    jobti.append(titdi)
    
    nedicf = dicjob.copy()
    nedicf.update(nedicf)
    
    numdic.update({ldi : nedicf})
    
    #if 'education' in catedi:
    #    print (catedi)

        
In [11]:
allpda = list()
In [ ]:
 
In [12]:
for dal in datlis:
    allpdata = pandas.to_datetime(dal)
    allpda.append(allpdata)
In [15]:
datsli = list(set(datlis))
In [16]:
import arrow
In [17]:
panlis = list()
In [18]:
for dalz in datsli:
    print (dalz)
    panlis.append(pandas.to_datetime(dalz))
    
21-AUG-15
12-AUG-15
02-SEP-15
01-SEP-15
28-AUG-15
17-AUG-15
13-AUG-15
20-AUG-15
31-AUG-15
26-AUG-15
19-AUG-15
14-AUG-15
25-AUG-15
In [19]:
for panl in panlis:
    #print (panl.dayofweek)
    print (panl.dayofyear)
    print (panl.date())
233
2015-08-21
224
2015-08-12
245
2015-09-02
244
2015-09-01
240
2015-08-28
229
2015-08-17
225
2015-08-13
232
2015-08-20
243
2015-08-31
238
2015-08-26
231
2015-08-19
226
2015-08-14
237
2015-08-25
In [20]:
catsli = list(set(catlis))
In [21]:
locset = list(set(loclis))
In [22]:
locset
Out[22]:
['Wellington',
 'Otago',
 'Canterbury',
 'Bay of Plenty',
 'Gisborne',
 'Manawatu',
 'Auckland',
 'Whangarei',
 'Napier',
 'Whanganui']

brobeurtweet

BroBeur Tweet

Sends tweets for brobeur

In [12]:
from TwitterFollowBot import TwitterBot
import praw
import random
import tweepy
In [14]:
my_bot = TwitterBot()
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-14-fa8d504ee7f0> in <module>()
----> 1 my_bot = TwitterBot()

/usr/local/lib/python3.5/dist-packages/TwitterFollowBot/__init__.py in __init__(self, config_file)
     40         self.TWITTER_CONNECTION = None
     41 
---> 42         self.bot_setup(config_file)
     43 
     44         # Used for random timers

/usr/local/lib/python3.5/dist-packages/TwitterFollowBot/__init__.py in bot_setup(self, config_file)
     80                 line = line.split(":")
     81                 parameter = line[0].strip()
---> 82                 value = line[1].strip()
     83 
     84                 if parameter in ["USERS_KEEP_FOLLOWING", "USERS_KEEP_UNMUTED", "USERS_KEEP_MUTED"]:

IndexError: list index out of range
In [3]:
r = praw.Reddit('brobeurtweet')
In [4]:
subredz = ['DevBlogs', 'gamedev', 'gamejams', 'Games', 'gaming']
In [5]:
randsubrepo = random.choice(subredz)
In [6]:
hashthi = ('#' + randsubrepo)
In [7]:
rgvz = r.get_subreddit(randsubrepo)
In [8]:
rgtnew = rgvz.get_new
In [9]:
ransub = rgvz.get_random_submission()
In [10]:
rantit = ransub.title
In [ ]:
 
In [11]:
randurl = ransub.url
In [12]:
my_bot.send_tweet(rantit + ' ' + randurl + ' '  + hashthi)
Out[12]:
{'contributors': None,
 'coordinates': None,
 'created_at': 'Mon Aug 10 14:45:58 +0000 2015',
 'entities': {'hashtags': [{'indices': [131, 138], 'text': 'gaming'}],
  'symbols': [],
  'urls': [{'display_url': 'reddit.com/r/gaming/comme…',
    'expanded_url': 'http://www.reddit.com/r/gaming/comments/3ggi8h/i_dont_intend_for_this_post_to_get_anywhere_this/',
    'indices': [108, 130],
    'url': 'http://t.co/pXAOfvf7oI'}],
  'user_mentions': []},
 'favorite_count': 0,
 'favorited': False,
 'geo': None,
 'id': 630751950212984832,
 'id_str': '630751950212984832',
 'in_reply_to_screen_name': None,
 'in_reply_to_status_id': None,
 'in_reply_to_status_id_str': None,
 'in_reply_to_user_id': None,
 'in_reply_to_user_id_str': None,
 'is_quote_status': False,
 'lang': 'en',
 'place': None,
 'possibly_sensitive': False,
 'retweet_count': 0,
 'retweeted': False,
 'source': '<a href="http://brobeur.com" rel="nofollow">brobeurtweet</a>',
 'text': "I don't intend for this post to get anywhere, this is just a PSA for anyone who plays the game Organ Trail. http://t.co/pXAOfvf7oI #gaming",
 'truncated': False,
 'user': {'contributors_enabled': False,
  'created_at': 'Sat Mar 30 01:44:05 +0000 2013',
  'default_profile': False,
  'default_profile_image': False,
  'description': 'Video Game Development. #gamedev #linux #getsdrawn',
  'entities': {'description': {'urls': []},
   'url': {'urls': [{'display_url': 'brobeur.com',
      'expanded_url': 'http://brobeur.com',
      'indices': [0, 22],
      'url': 'http://t.co/KRO9XPRA01'}]}},
  'favourites_count': 7,
  'follow_request_sent': False,
  'followers_count': 565,
  'following': False,
  'friends_count': 632,
  'geo_enabled': False,
  'has_extended_profile': False,
  'id': 1315550370,
  'id_str': '1315550370',
  'is_translation_enabled': False,
  'is_translator': False,
  'lang': 'en',
  'listed_count': 16,
  'location': 'Hamilton, New Zealand',
  'name': 'BroBeur.com',
  'notifications': False,
  'profile_background_color': 'C0DEED',
  'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png',
  'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png',
  'profile_background_tile': False,
  'profile_banner_url': 'https://pbs.twimg.com/profile_banners/1315550370/1431862528',
  'profile_image_url': 'http://pbs.twimg.com/profile_images/487211737284739072/HBzP949-_normal.png',
  'profile_image_url_https': 'https://pbs.twimg.com/profile_images/487211737284739072/HBzP949-_normal.png',
  'profile_link_color': '300808',
  'profile_sidebar_border_color': 'C0DEED',
  'profile_sidebar_fill_color': 'DDEEF6',
  'profile_text_color': '333333',
  'profile_use_background_image': True,
  'protected': False,
  'screen_name': 'brobeur',
  'statuses_count': 656,
  'time_zone': 'Auckland',
  'url': 'http://t.co/KRO9XPRA01',
  'utc_offset': 43200,
  'verified': False}}
In [13]:
my_bot.auto_rt("#gamejams", count=1)
error: Twitter sent status 403 for URL: 1.1/statuses/retweet/630633343676194816.json using parameters: (oauth_consumer_key=KdTAefyvNji8T1SjMLPKffkpP&oauth_nonce=7184996403309253260&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1439217960&oauth_token=1315550370-HfN0yyyApoowMKSL9SZKiHSezT77LveocL3e3SI&oauth_version=1.0&oauth_signature=d3dfx65hmHl5jTd33zdVZEAsCcg%3D)
details: {'errors': [{'code': 328, 'message': 'Retweet is not permissible for this status.'}]}
In [67]:
my_bot.auto_follow("#gamedev", count=1)
followed abinash_assam
In [ ]:
 

curschopanda

In [159]:
import pandas as pd
import random
import json 
import random
import requests
import bs4
#import matplotlib.pyplot as plt

Current School Panda

Working with directory school data

Creative Commons in all schools

This script uses a csv file from Creative Commons New Zealand and a csv file from the Ministry of Education.

The ccnz csv file contains the names of schools that have a cc licence and the type of licence.

The Ministry of Education csv file contains every public school in New Zealand and info about them.

Standards for website addresses - if the school name ends with 'school' then cut it from the name and add it after the dot. eg Horowhenua College → horowhenua.college.nz, not horowhenuacollege.school.nz. (A sketch of this rule follows the example below.)

Auckland Girls Grammar School

aucklandgirlsgrammar.school.nz not aucklandgirlsgrammarschool.school.nz
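A minimal sketch of that rule, following the Auckland Girls Grammar example (the helper name is made up):

def schooldomain(name):
    # Cut a trailing ' School', lowercase, drop spaces, add the suffix.
    base = name.lower()
    if base.endswith(' school'):
        base = base[:-len(' school')]
    return base.replace(' ', '') + '.school.nz'

print(schooldomain('Auckland Girls Grammar School'))
# aucklandgirlsgrammar.school.nz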

Every school has their own domain name and a Linux server hosting the site. Private/public keys. Static site, git repo. Nikola blog.

What made you choose that particular Creative Commons licence?

I like the CC:BY licence because it offers the most freedom to people.

I am not a fan of licences that restrict commercial use. I believe everyone should be able to do what they like with my work with minimal interference.

If I could I would remove non-commercial licenses.

In the early days of my art blogging I would license under cc nc. This was wrong and I later changed this to a cc by licence.

With my photography I once had a photo I had taken run in the newspaper. It made the front page. I was offered money and asked for my permission. I was fine with it of course - the licence allows this. At the bottom of the photo it read: PHOTO: William Mckee. Perfect.

The only thing I ask is that they attribute.

I like the idea of ShareAlike, but at the end of the day I really don't care and would hate to chase down people who license it wrong. Sure, I don't like that people could take my stuff and make it not open. I think everything should be open and free.

My art site - artcontrol.me - is currently down, but when it was up I licensed the site under CC:BY. Elements of the site are still up, such as my YouTube channel.

I attended art school in Wellington - The Learning Connexion. My focus was on drawing and painting. I taught myself programming on the bus to art school. Even when I was drawing on the easel I would be 'drawing' python code. During breaks I would often get my laptop out.

I volunteered at Whaihanga Early Learning Centre. I spent the majority of my time there in the art area doing collaborative works with others. Oil pastel, coloured pencil and pencil were my mediums of choice. Sometimes I would use paint, but it's quite messy.

Copyright shouldn't be the default. Apply and pay if you want copyright. CC licence by default. That would sort the world.

In [160]:
crcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=5, index_col=0, usecols=[0,1,2])

Compare the schools on the List of CC schools with the list of all public/private schools.

Why shouldn't it be the default for all public schools to licence their work under a Creative Commons BY licence?

In [161]:
#crcom
In [162]:
aqcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=6, usecols=[0])
In [163]:
aqjsz = aqcom.to_json()
In [164]:
dicthol = json.loads(aqjsz)
In [165]:
dschoz = dicthol['School']
In [166]:
#dicthol
In [167]:
dscv = dschoz.values()
In [168]:
ccschool = list()
In [169]:
for ds in range(87):
    #print(dschoz[str(ds)])
    ccschool.append((dschoz[str(ds)]))
In [170]:
schccd = dict()
In [171]:
scda = dict({'cc' : True})
In [172]:
sanoc = dict({'cc' : False})
In [173]:
#schccd.update({ccs : scda})
In [174]:
for ccs in ccschool:
    #These schools have a cc license. Update the list of all schools with cc and value = true.
    #Focus on schools that don't have cc license.
    #Filter schools in area that don't have cc license.
    #print (ccs)
    schccd.update({ccs : scda})
In [175]:
ccschz = list()
In [176]:
for dsc in range(87):
    #print (dschoz[str(dsc)])
    ccschz.append((dschoz[str(dsc)]))
In [177]:
#Append in names of schools that are missing from this dict. 
#Something like
#schccd.update{school that doesnt have cc : {'cc' : False}}
#schccd

Cycle through only the first 89 values - stop when reaching: "These are schools that have expressed an interest in CC, and may have a policy in progress."

New spreadsheet for schools with a CC licence in progress. Where are they up to? What are the next steps?

Why are schools using a licence that isn't CC:BY? They really should be using the same licence. CC NC is unacceptable. SA would be OK, but the majority of schools already have CC BY, so it's best to go with what is common so you don't have conflicts of licences.

In [178]:
noclist = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[1])
In [179]:
webskol = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[6])
In [180]:
websjs = webskol.to_json()
In [181]:
dictscha = json.loads(websjs)
In [ ]:
 
In [182]:
numsweb = dictscha['School website']
In [183]:
lenmuns = len(numsweb)
In [184]:
#for nuran in range(lenmuns):
#    print (numsweb[str(nuran)])
In [185]:
#noclist.values[0:10]
In [186]:
aqjaq = noclist.to_json()
In [187]:
jsaqq = json.loads(aqjaq)
In [188]:
najsa = jsaqq['Name']
In [189]:
alsl = len(najsa)
In [190]:
allschlis = list()
In [191]:
for alr in range(alsl):
    allschlis.append(najsa[str(alr)])
In [192]:
#allschlis
In [193]:
newlis = list(set(allschlis) - set(ccschool))
In [194]:
empd = dict()

Create a RESTful API of schools that have cc and those that don't.

Merge the two dicts together. Both are {name of school : {'cc' : True/False}}.

In [ ]:
 
In [195]:
sstru = json.dumps(schccd)
In [196]:
for newl in newlis:
    #print (newl)
    empd.update({newl : sanoc})
In [197]:
empdum = json.dumps(empd)
In [203]:
trufal = empd.copy()
trufal.update(schccd)
In [207]:
trfaj = json.dumps(trufal)
In [209]:
savjfin = open('/home/wcmckee/ccschool/index.json', 'w')
savjfin.write(trfaj)
savjfin.close()
In [200]:
#savtru = open('/home/wcmckee/ccschool/cctru.json', 'w')
#savtru.write(sstru)
#savtru.close()
In [148]:
#for naj in najsa.values():
    #print (naj)
#    for schk in schccd.keys():
        #print(schk)
#        allschlis.append(schk)
In [149]:
#for i in ccschz[:]:
#    if i in allschlis:
#        ccschz.remove(i)
#        allschlis.remove(i)
In [150]:
#Cycle though some schools rather than everything. 
#Cycle though all schools and find schools that have cc 
#for naj in range(2543):
    #print(najsa[str(naj)])
#    for schk in schccd.keys():
#        if schk in (najsa[str(naj)]):
            #Remove these schools from the list
#            print (schk)
            
In [ ]:
 
In [ ]:
 

libedugov

Library Education Govt

Script to deal with the RSS XML feed of library.education.govt.nz.

In [25]:
import requests
import json
import xmltodict
import bs4
In [2]:
reqlib = requests.get('https://library.education.govt.nz/rss/highlights')
In [3]:
reqlibt = reqlib.text
In [4]:
libtd = xmltodict.parse(reqlibt)
In [5]:
rslen = len(libtd['rss']['channel']['item'])
In [6]:
#rslen
Out[6]:
10
In [7]:
wrapdict = dict()
In [19]:
#print (libtd['rss']['channel']['item'][0])
OrderedDict([(u'title', u'Best schools in Auckland: 24 things to know before you choose a school'), (u'link', u'https://library.education.govt.nz/blogs/best-schools-auckland-24-things-know-you-choose-school'), (u'description', u'<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>By Simon Wilson<br />\n\tMetro Jul/August 2015: 40-49 <em>(article)</em></p>\n<p><span style="color: rgb(48, 48, 48); line-height: 20px;">Presents tables comparing Auckland secondary schools. Looks at school leaver qualifications and NCEA results, including University Entrance and Scholarship rates. Takes into account school decile, and whether Cambridge exams or the International Baccalaureate are offered as well as or instead of NCEA. Shows the new 2015 deciles.</span></p>\n<p><a href="http://library.education.govt.nz/cgi-bin/koha/opac-reserve.pl?biblionumber=51571" style="line-height: 18.5714282989502px; color: rgb(0, 120, 206); outline: dotted thin; outline-offset: -2px; background-color: rgb(255, 255, 255);">Request item</a></p>\n</div></div></div>'), (u'pubDate', u'Wed, 01 Jul 2015 20:46:34 +0000'), (u'dc:creator', u'tim.admin'), (u'guid', OrderedDict([(u'@isPermaLink', u'false'), ('#text', u'10833 at https://library.education.govt.nz')]))])
In [30]:
for libi in range(rslen):
    msjobdic = dict()

    #print (libtd['rss']['channel']['item'][libi]['title'])
    msjobdic.update({'title' : (libtd['rss']['channel']['item'][libi]['title'])})
    msjobdic.update({'link' : (libtd['rss']['channel']['item'][libi]['link'])})
    soupdes = bs4.BeautifulSoup(libtd['rss']['channel']['item'][libi]['description'])
    
    msjobdic.update({'description' : soupdes.text})
    msjobdic.update({'jsonlen' : libi})
    #findict = dict()
    
    #wrapdict.update({'title' : (libtd['rss']['channel']['item'][libi]['title'])})
    #totlen = len(libi)
    #for tes in range(totlen):
    #    wrapdict.update({'title' : txtspli[tes][1]})
    #findict.update({txtspli[0][0] : txtspli[0][1]})
    msjobz = msjobdic.copy()
    msjobz.update(msjobdic)
    
    wrapdict.update({libi : msjobz})
    #jsmsdob = json.dumps(wrapdict)
    
    
    #jslibi.update({rslen : msjobz})
    #jsmsdob = json.dumps(wrapdict)
Best schools in Auckland: 24 things to know before you choose a school
Tauira: Māori methods of learning and teaching
Dangerous liaisons
Bully beef
Could do better
What Candy Crush Saga teaches us about motivating employees
Influential Aucklanders in education
Explaining the achievement gap between indigenous and non-indigenous students: an analysis of PISA 2009 results for Australia and New Zealand
Panel-beaters don't have to learn Shakespeare
Leaders as decision architects
In [32]:
savjslib = json.dumps(wrapdict)
In [35]:
savjfin = open('/home/wcmckee/github/wcmckee.com/output/moelib/index.json', 'w')
savjfin.write(savjslib)
savjfin.close()
In [ ]: