"I think they may be rebuilding themselves while we're not looking."Neloth

A list of bot tasks for KINMUNE or AkulakhanBot. If I don't get to your request immediately, it's most likely because the bot is occupied with another task, but don't let that stop you from adding any more that you have. Some of the larger projects for the bot, namely semi-automatic ones, can take a week or more to complete. However, most regular tasks can be finished in less than a day.

Usage

Entries on this page are likely to be carried out by one of two active bots on the wiki: AkulakhanBot, in use since May 2017, is operated by Atvelonis and makes menial edits via AutoWikiBrowser or sometimes custom Python scripts. KINMUNE, in use since February 2014, is operated by Flightmare and is typically responsible for somewhat more advanced tasks via custom Python scripts.

AutoWikiBrowser

AutoWikiBrowser (AWB) is used by AkulakhanBot to complete simple "find & replace" tasks, such as link fixes and basic formatting. A bot cannot do anything particularly advanced with this interface, but it is still invaluable for general wiki maintenance.

Examples of tasks that would be completed with AWB:

  • Skyrim → Skyrim (Online) for links on Online articles
  • n/a → N/A in tables and infoboxes
  • Removing specific overarching categories
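At its core, an AWB find & replace is just a regex substitution applied to each page's wikitext. A rough Python equivalent of the n/a → N/A task might look like this (the function name and sample wikitext are made up for illustration):

```python
import re

def normalize_na(wikitext: str) -> str:
    """Replace standalone "n/a" values with "N/A" in wikitext.

    Hypothetical sketch of an AWB-style find & replace; the real task
    runs over a list of pages inside AutoWikiBrowser itself.
    """
    # \b keeps the match from touching longer tokens that merely contain "n/a"
    return re.sub(r'\bn/a\b', 'N/A', wikitext, flags=re.IGNORECASE)

print(normalize_na('| health = n/a'))  # → | health = N/A
```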

Custom scripts

More advanced (but still menial) tasks will generally be done by writing custom Python scripts instead of using AutoWikiBrowser: this may take a while if a new script is required, depending on the urgency of the task.

Examples of tasks that would be completed with custom scripts:
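As an illustration, the query step that most custom scripts share can be sketched as follows. The category name and response shape are assumptions for the example; parsing is kept separate from fetching so the logic can be tested without the network.

```python
def member_titles(response_json):
    """Extract page titles from a 'list=categorymembers' API response.

    The fetch itself would go through a requests.Session against the
    wiki's api.php with a payload like the one below; this helper only
    handles the JSON the API sends back.
    """
    return [m['title'] for m in response_json['query']['categorymembers']]

# Hypothetical payload for the query step; the 'cmcontinue' value from
# the response would be fed back in to paginate past the first batch.
payload = {'action': 'query', 'list': 'categorymembers',
           'cmtitle': 'Category:Online: Characters',
           'cmlimit': '500', 'format': 'json'}

# Canned response shaped like the API's JSON, for illustration only
sample = {'query': {'categorymembers': [{'title': 'Abnur Tharn'},
                                        {'title': 'Naryu Virian'}]}}
print(member_titles(sample))  # → ['Abnur Tharn', 'Naryu Virian']
```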

In progress

The bot is either currently working on these things, or has just finished them.

AutoWikiBrowser tasks

High priority

If you want me to do something quickly, place it under the "high priority" section (please do not place it in a lower tier just to be polite). I will get to it as fast as I can.

Low priority

If your task does not have any sort of deadline or is otherwise not very urgent, place it in the "low priority" section.

  • Add {{LE}} to achievement names on Achievements (Online)
  • Add "title" section to LegendsCharacter template.
  • Remove space between "Base ID" in infoboxes
  • Add ==Licensing== header to files missing it (excluding those where it is transcluded in image licensing template)
    • Transition all written categories to image licensing templates for consistency
  • Remove extraneous licensing content for files using Category:Image Licensing Templates
  • Align the creatures = and type = infobox parameters properly
  • Add |thumb to files with captions in blogs and elsewhere (without this, the captions will not appear)
    • Do NOT replace center/left/right; just add it as the second parameter, right after the file name
  • Replace "NPCs" with "characters."
  • Add "previous" and "next" variables to pages using {{OnlineBooks}} and DLC derivatives
  • Remove overarching merchant categories for ESO NPCs (DLCs only? Check!)
  • Remove the Category:ESO Morrowind: Enemies from pages
  • Remove the Category:Morrowind: Enemies from pages
  • Remove the Category:Arena: Enemies from pages
  • Remove the Category:Online: Enemies from pages
  • Change Ice Wolf to Ice Wolf (Skyrim), fix links from other articles
  • Replace the |enemies parameter in Template:OnlineLocations and all the Online locations with |creatures. If something is written there, it should be moved to the |characters or to the |creatures parameters
  • Split Sheogorath's page into proper pages/game-related/lore-related.

Content

This section is for semi-automatic tasks that require me to add content. Such tasks may take a long time to finish.

  • Add ESO NPC classes (where applicable)
    • Category: Online: Characters (+DLC); skip if contains "class = N/A" (include additional whitespace)
  • Remove enemy stuff for all games; replace with either character or creature, depending on context
  • Missing (locate and add info for each) – Regex for spacing
    • race = {{Missing|Online}}
    • gender = {{Missing|Online}}
  • {{MorrowindCharacters}} (Regex: skip if includes RefID)
  • Add book header content to ==Content== section. e.g. title/author, if it's in the book itself

Custom tasks

  • Sort interwiki links alphabetically

Notes

Main article: Wikipedia:Regular expression
  • Regex full line removal: prepend \n to account for newline
  • Regex full line removal example for level parameter in infoboxes (source): \|\s*level\s*=[^\|\r\n]*[\r\n]+
  • Regex full line removal example for template with two variables (source): \{\{\s*ImageImprove\s*\|([^\|]*)\|([^\|]*)\}\}\s+
  • Regex full line removal example for gender parameter in infoboxes: \|\s*gender\s*=([^\|]*)
    • To prevent it from cutting off the hatnote, apply \n and an applicable term present on the following line
  • Regex replacement of integer with specific number range: \*\d*\s*{{G}}
  • Regex AND operator (source, 2): (?=[\d\D]*word1)(?=[\d\D]*word2)(?=[\d\D]*word3)
  • Regex OR operator: (word1|word2)
  • Discussions API
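The removal patterns above can be sanity-checked in Python's re module before being run in AWB. A small self-test, using made-up sample wikitext:

```python
import re

# Made-up infobox snippet for illustration
infobox = '{{NPC\n| name = Example\n| level = 10\n| gender = Male\n}}'

# Full-line removal of the level parameter (pattern from the notes above)
level_re = re.compile(r'\|\s*level\s*=[^\|\r\n]*[\r\n]+')
print(level_re.sub('', infobox))

# AND operator: every lookahead must match somewhere in the text
and_re = re.compile(r'(?=[\d\D]*name)(?=[\d\D]*level)(?=[\d\D]*gender)')
print(bool(and_re.search(infobox)))  # True
```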

Custom scripts

For transparency. Scripts can easily be scheduled on Windows 10 with the Task Scheduler. Credit to Flightmare for the fundamental code (see also), although some parts have been adjusted.

core.py (original)
#!/usr/bin/env python3
# Credit: Flightmare
import requests
import json
import time
import pickle
import pathlib
"""
requests (cmd: 'py -m pip install requests'): http://docs.python-requests.org/en/master/
json: https://docs.python.org/3/library/json.html
time: https://docs.python.org/3/library/time.html
pickle: https://docs.python.org/3/library/pickle.html
pathlib: https://docs.python.org/3/library/pathlib.html
"""

headers = {'Connection': 'keep-alive', 'Content-Type': 'application/x-www-form-urlencoded', 'User-Agent': 'Atvelonis/Bot'}

def login(wiki, username, password):
    try:
        # open: https://docs.python.org/3.6/library/functions.html#open
        session = pickle.load(open(pathlib.Path('C:/Users/<Username>/Desktop/AkulakhanBotLogin.txt'), 'rb')) # Read from login file
        if is_logged_in(session, username, wiki):
            print('Existing session found. Loading...')
            time.sleep(0.5) # For ratelimit/readability
            return session
    except (OSError, pickle.UnpicklingError): # No saved session, or the file is unreadable
        pass
    # Fall through here if the saved session is missing, invalid, or stale
    print('No valid session found. Creating new login session...')
    session = requests.Session()
    r = session.post('https://services.fandom.com/auth/token', data={
        'username': username,
        'password': password
        }, headers=headers)
    if is_logged_in(session, username, wiki):
        print('Access token... ' + r.json()['access_token'])
    else:
        print('Error... ' + r.json()['title'])
    pickle.dump(session, open(pathlib.Path('C:/Users/<Username>/Desktop/AkulakhanBotLogin.txt'), 'wb')) # Write to login file
    return session

# bool: true if logged in as provided user, false for other user or anon
def is_logged_in(session, username, wiki):
    payload = {'action': 'query', 'meta': 'userinfo', 'format': 'json'}
    r = session.get('https://community.fandom.com/api.php', params=payload, headers=headers)
    if r.json()['query']['userinfo']['name'] == username:
        print('Login... True')
        time.sleep(0.5) # For ratelimit/readability
    else:
        print('Login... False')
        time.sleep(0.5) # For ratelimit/readability
    return r.json()['query']['userinfo']['name'] == username

# https://www.mediawiki.org/wiki/Manual:Edit_token
def get_edit_token(session, wiki):
    payload = {'action': 'query', 'prop': 'info', 'intoken': 'edit', 'titles': '#', 'format': 'json'}
    r = session.post('https://community.fandom.com/api.php', data=payload, headers=headers)
    print('Edit token... ' + r.json()['query']['pages']['-1']['edittoken'])
    return r.json()['query']['pages']['-1']['edittoken']

def get_wiki_id(session, wiki):
    payload = {'action': 'query', 'meta': 'siteinfo', 'siprop': 'wikidesc', 'format': 'json'}
    r = session.get('https://'+wiki+'.fandom.com/api.php', params=payload, headers=headers)
    print('Wiki ID... ' + r.json()['query']['wikidesc']['id'])
    return r.json()['query']['wikidesc']['id']
file_licensing.py (original)
Warning: bad programming by Atvelonis. Loop + recursion is silly. :P
#!/usr/bin/env python3
import core
import json
import requests
import time
"""
core.py: for login
json: https://docs.python.org/3/library/json.html
requests (cmd: 'py -m pip install requests'): http://docs.python-requests.org/en/master/
time: https://docs.python.org/3/library/time.html
"""

# Info for the web server. https://en.wikipedia.org/wiki/HTTP_persistent_connection
# The API for Discussions asks for Content-Type, User-Agent. Implemented here for consistency with non-editing scripts
headers = {'Connection': 'keep-alive', 'Content-Type': 'application/x-www-form-urlencoded', 'User-Agent': 'Atvelonis/Bot'}

# Account credentials for the bot to log in
wiki = 'elderscrolls'
username = 'AkulakhanBot'
password = '<Password>'

# Calls on functions in core to log in and begin edit session
session = core.login(wiki, username, password)
wiki_id = core.get_wiki_id(session, wiki)
edit_token = core.get_edit_token(session, wiki)

def file_licensing(startImage):
    # Uses the 'action=query' module to generate a list of images. 5000 is the max for sysops/bots in MediaWiki
    # https://www.mediawiki.org/wiki/API:Query
    # https://www.mediawiki.org/wiki/API:Allimages
    payload = {'action': 'query', 'list': 'allimages', 'aifrom': startImage, 'ailimit': '5000', 'format': 'json'}
    decoded_json = session.get('https://'+wiki+'.fandom.com/api.php', params=payload, headers=headers).json()
    # print(decoded_json)

    for page in decoded_json['query']['allimages']:
        # Uses the 'action=raw' module to return a page's wikitext
        payload = {'action': 'raw'}
        body = session.get('https://'+wiki+'.fandom.com/wiki/'+page['title'], params=payload).text
        time.sleep(0.5) # For ratelimit

        # Changes the desired content
        if not '{{Imagequality' in body and not '{{Information' in body:
            body = '{{Information\n|attention [...]'

        # Publishes the edit. https://www.mediawiki.org/wiki/API:Edit
        payload = {'action': 'edit', 'title': page['title'], 'summary': 'Adding full file licensing', 'bot': '1', 'watchlist': 'nochange', 'format': 'json', 'text': body, 'token': edit_token}
        print(session.post('https://'+wiki+'.fandom.com/api.php', data=payload, headers=headers).text)
        print(body)

    # Recursive call so that loop continues for query > 5000 (next page in list)
    # https://www.mediawiki.org/wiki/API:Raw_query_continue
    if 'query-continue' in decoded_json:
        return file_licensing(decoded_json['query-continue']['allimages']['aifrom'])

file_licensing('')
discussions_delete.py (original)
#!/usr/bin/env python3
import core
import json
import requests
import time
import re
"""
core.py: for login
json: https://docs.python.org/3/library/json.html
requests (cmd: 'py -m pip install requests'): http://docs.python-requests.org/en/master/
time: https://docs.python.org/3/library/time.html
re: https://docs.python.org/3/library/re.html
"""

# For timing, in seconds
ts = time.time()
update_interval = 120

# Info for the web server. https://en.wikipedia.org/wiki/HTTP_persistent_connection
# The API for Discussions asks for Content-Type, User-Agent
headers = {'Connection': 'keep-alive', 'Content-Type': 'application/x-www-form-urlencoded', 'User-Agent': 'Atvelonis/Bot'}

# Account credentials for the bot to log in
wiki = 'elderscrolls'
username = 'AkulakhanBot'
password = '<Password>'

# Calls on functions in core to log in and begin edit session
session = core.login(wiki, username, password)
print(core.is_logged_in(session, username, wiki)) # Boolean: check if logged in or not
wiki_id = core.get_wiki_id(session, wiki)
print(wiki_id)

# Vague filter terms that Wikia defined
payload = {'limit': '25', 'page': '0', 'responseGroup': 'small', 'reported': 'false', 'viewableOnly': 'true'}
r = session.get('https://services.fandom.com/discussion/'+wiki_id+'/posts', params=payload, headers={'Accept': 'application/hal+json', 'User-Agent': 'Atvelonis/Bot'})
print(r) # Should be 200

# Views posts made in the last two minutes and deletes them if needed
for post in reversed(r.json()['_embedded']['doc:posts']):
    if int(post['creationDate']['epochSecond']) > ts - update_interval:
        content = post['rawContent']
        name = post['createdBy']['name']
        avatar = post['createdBy']['avatarUrl']
        forum_name = post['forumName']
        thread_id = post['threadId']
        user_id = post['createdBy']['id']
        post_id = post['id']
        thread_title = post['_embedded']['thread'][0]['title']
        post_epoch = post['creationDate']['epochSecond']
        
        if post['isReply']:
            thread_title = post['_embedded']['thread'][0]['title'] + ' (reply)'
        is_reply = post['isReply']
        
        print(name + ' says: ' + content)
        
        # Content the bot deletes
        if 'Like, C0DA makes it canon, dude.' in content:
            print('Disallowed content matched')
            session.put('https://services.fandom.com/discussion/'+wiki_id+'/posts/'+post_id+'/delete', headers={'Accept': 'application/hal+json', 'User-Agent': 'Atvelonis/Bot'})
            print('Deleted: https://elderscrolls.fandom.com/d/p/'+thread_id+'/r/'+post_id+' and content was: '+content)
Alternative for deletion code
blacklist = ['foo', 'bar']
if any(i in content for i in blacklist):
    print('Disallowed content matched')
    session.put('https://services.fandom.com/discussion/'+wiki_id+'/posts/'+post_id+'/delete', headers={'Accept': 'application/hal+json', 'User-Agent': 'Atvelonis/Bot'})
    print('Deleted: https://elderscrolls.fandom.com/d/p/'+thread_id+'/r/'+post_id+' and content was: '+content)
Regex deletion example
if re.match(r'Click Here https:\/\/(.\.)*.+\..+\/.+-*', content):
    # ...followed by the same deletion call as in the blacklist example above
bot_elderscrolls_file_sourcing.py
import core
import json
import requests
"""
core.py: for login
json: https://docs.python.org/3/library/json.html
requests: (cmd: 'py -m pip install requests'): http://docs.python-requests.org/en/master/
"""

# Info for the web server. https://en.wikipedia.org/wiki/HTTP_persistent_connection
# The API for Discussions asks for Content-Type, User-Agent. Implemented here for consistency with non-editing scripts
headers = {'Connection': 'keep-alive', 'Content-Type': 'application/x-www-form-urlencoded', 'User-Agent': 'Atvelonis/Bot'}

# Account credentials for the bot to log in
wiki = 'elderscrolls'
username = 'AkulakhanBot'
password = 'BetterThanKINMUNE'

# Calls on functions in core to log in and begin edit session
session = core.login(wiki, username, password)
wiki_id = core.get_wiki_id(session, wiki)
edit_token = core.get_edit_token(session, wiki)

def image_sourcing():
    # https://www.mediawiki.org/wiki/API:Query
    # https://www.mediawiki.org/wiki/API:Logevents
    payload = {'action': 'query', 'list': 'logevents', 'letype': 'upload', 'lelimit': '20', 'format': 'json'}
    decoded_json = session.get('https://'+wiki+'.fandom.com/api.php', params=payload, headers=headers).json()
    # print(decoded_json)

    # Uses the "action=raw" module to returns a page's wikitext. Loop used in case multiple pages are to be edited
    for page in decoded_json['query']['logevents']:
        payload = {'action': 'raw'}
        body = session.get('https://'+wiki+'.fandom.com/wiki/'+page['title'], params=payload).text
        print(body)

        # Adds {{UnsourcedImage}} by default, unless one of the phrases below is in the file description.
        unsourced = True
        for line in body.splitlines():  
            if '{{Information' in line or '{{UnsourcedImage}}' in line or '[[Category:Videos]]' in line:
                unsourced = False
                break
        
        if unsourced:
            body = '{{UnsourcedImage}}\n' + body
        
        # Publishes the edit. https://www.mediawiki.org/wiki/API:Edit
        payload = {'action': 'edit', 'title': page['title'], 'summary': 'Incomplete file licensing', 'bot': '1', 'watchlist': 'nochange', 'format': 'json', 'text': body, 'token': edit_token}
        print(session.post('https://'+wiki+'.fandom.com/api.php', data=payload, headers=headers).text)

image_sourcing()