In earlier times I was much involved in designing learning materials for production by manual typing and printing, transitioning to production with word processors and other computer software, then to partly online or blended delivery, and eventually to fully online. Over those decades the transition to paperless, or almost paperless, work left me with a deeply ingrained ambivalence about computers and ICT. Did computers save time and reduce workloads, or waste time and increase workloads?
Of course, many years of almost fantastic improvements in performance, reliability and software design have minimised the 'waste time and increase workloads' factor. But from time to time I encounter irritating and frustrating reminders, and this musing discusses some recent personal incidents in the 'waste time and increase workloads' category. Although these incidents are not especially noteworthy or unusual, and may be similar to incidents encountered by many members of HERDSA, the important part of this musing is how such incidents may provoke attitudinal changes and new cautions about artificial intelligence, or 'AI'.
The first frustrating incident was in mid-January, during my work for WA's Teaching and Learning Forum[1]. After some years of connecting normally to the website (wand.edu.au) used for TL Forum communications with presenters, up popped 'Can't securely connect to this page ... This might be because the site uses outdated or unsafe TLS security settings. If this keeps happening, try contacting the website's owner.' Meanwhile, numerous other websites that I use frequently all connected normally, and no other TL Forum workers were encountering problems with their access to wand.edu.au. After about 15 hours, normal connecting returned for wand.edu.au, so I thought it was only a temporary problem with that site. A relief, as it was a busy time for TL Forum organising. But within a day, it happened again: 'Can't securely connect...' and the Apple equivalent, '... can't establish a secure connection...'. Sadly, it was not a temporary problem.

The recommended action, 'try contacting the website's owner', which I tried via the edu.au domain registrar and a 'whois' search[2], led into an ownership maze. I tried the online help pages by Apple, Microsoft, Bigpond and Netgear (provider of our home's ADSL modem-router-WiFi base station), with no clues forthcoming. However, various blogs suggested some relationship between 'Can't securely connect...' and the problem known as 'Weak Diffie-Hellman and the Logjam Attack'[3]. Oh well, that's beyond me, so I gave up and turned to the usual last resort, which is to re-install or upgrade all software at my end. After doing that for Safari and OS X Yosemite on my Mac, the next candidate was the Netgear firmware[4]. Rather surprisingly, success! A Netgear firmware upgrade from V1.1.00.20 to V1.1.00.26 solved the problem, though it did nothing to explain it, or soothe my irritation over time wasting that totalled perhaps 12-15 hours spread over four days, or dissuade me from considering this to be an instance of 'AI' that was not so intelligent.
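For readers inclined to diagnose such failures themselves before re-installing everything, a minimal sketch follows, using only Python's standard library, of probing which TLS protocol versions a server will accept. The host name is the one from the incident; everything else is illustrative, and note that this checks protocol versions only: the Logjam issue concerns weak Diffie-Hellman parameters, which online services such as the Qualys SSL Labs test report in more detail.

```python
# Minimal sketch: probe which TLS protocol versions a server accepts.
# Standard library only; 'wand.edu.au' is the site from the incident
# above - substitute any host you wish to test.
import socket
import ssl

HOST, PORT = "wand.edu.au", 443

for name, version in [
    ("TLS 1.0", ssl.TLSVersion.TLSv1),
    ("TLS 1.1", ssl.TLSVersion.TLSv1_1),
    ("TLS 1.2", ssl.TLSVersion.TLSv1_2),
    ("TLS 1.3", ssl.TLSVersion.TLSv1_3),
]:
    # Pin the context to exactly one protocol version per attempt.
    context = ssl.create_default_context()
    context.minimum_version = version
    context.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{name}: accepted, cipher {tls.cipher()[0]}")
    except (ssl.SSLError, OSError) as exc:
        # A handshake failure here means this version was refused,
        # either by the server or by the local OpenSSL build.
        print(f"{name}: rejected ({exc.__class__.__name__})")
```

A browser's 'outdated or unsafe TLS security settings' message is, in effect, the failure branch of a loop like this one, with no indication of which link in the chain (site, modem-router firmware, or local software) is at fault.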
The next time wasting incident related to a very routine action during my copy editing for the journal Issues in Educational Research[5]: look up an email address for an author. After failing to find the address within the usual minute or two, I emailed the co-authors to inquire. But my normally very reliable outgoing mail server, mail.bigpond.com, refused to accept the message, telling me that 'The server response was: [hexadecimal number omitted] Message content rejected due to suspected spam'[6]. After some time wasting experiments, I found that the 'suspected spam' was due to a URL that I had included because I thought it was relevant for the co-authors to know. The first part of the URL was https://www.xxxxxxxx.com/ (I'm using xxxxxxxx to conceal the identity of the author; it is not a word associated with spam). Bigpond rejected the full URL and also rejected 'www.xxxxxxxx.com' and 'xxxxxxxx.com', but accepted 'https://www.xxxxxxxx' and 'xxxxxxxx'. Only an hour was wasted in testing and identifying, but the incident prompted larger questions. Firstly, I did not know that Bigpond was reading the body text of my emails to look for spam. That is very irritating. Secondly, what does this incident say about the quality of the AI underlying Bigpond's spam checker?
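The trial-and-error testing described above can be scripted. The sketch below, assuming Python's standard smtplib, sends the same message body with each URL fragment and reports which ones the outgoing server rejects. The server name and the fragments are from the incident; the addresses are placeholders, and a real test against an ISP's server would likely also need smtp.starttls() and smtp.login() with one's account details.

```python
# Minimal sketch of the trial-and-error content-filter testing
# described above. Addresses are placeholders; authentication
# (starttls/login) is omitted and may be required in practice.
import smtplib
from email.message import EmailMessage

SERVER = "mail.bigpond.com"          # outgoing server from the incident
SENDER = "me@example.com"            # placeholder
RECIPIENT = "coauthor@example.com"   # placeholder

fragments = [
    "https://www.xxxxxxxx.com/",   # rejected in the incident
    "www.xxxxxxxx.com",            # rejected
    "xxxxxxxx.com",                # rejected
    "https://www.xxxxxxxx",        # accepted
    "xxxxxxxx",                    # accepted
]

for fragment in fragments:
    msg = EmailMessage()
    msg["From"], msg["To"] = SENDER, RECIPIENT
    msg["Subject"] = "Content filter test"
    msg.set_content(f"Test body containing: {fragment}")
    try:
        with smtplib.SMTP(SERVER) as smtp:
            smtp.send_message(msg)
        print(f"accepted: {fragment}")
    except smtplib.SMTPDataError as exc:
        # 'Message content rejected due to suspected spam' arrives
        # as an SMTP error after the message data is submitted.
        print(f"rejected: {fragment} ({exc.smtp_code} {exc.smtp_error})")
```

What the testing suggests is that the filter was pattern-matching on anything resembling a registered domain name, regardless of what the domain actually was; hardly an 'intelligent' judgment about spam.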
Another two recent time wasting incidents also concerned email traffic for Issues in Educational Research. Two of the journal's associate editors, who use Gmail addresses provided by Google, encountered problems with the non-arrival of emails containing new submissions. One advised, 'Re: 2 missing articles. I found them in my SPAM folder ... have discovered that a Gmail mailbox search does not find anything in the SPAM folder.' That is a little surprising: one could expect Gmail to use advanced AI for classifying email as spam (and for advising the recipient, just in case the AI was not smart enough).
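In fairness to Gmail, the behaviour my colleague encountered is documented rather than a bug: by default, Gmail's search excludes the Spam and Trash folders, and one must add a search operator such as 'in:spam' (or 'in:anywhere', which covers all mail including Spam and Trash) to the query, for example 'in:spam new submission'. Documented or not, it is hardly the behaviour a busy associate editor hunting for missing articles would expect.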
In another incident, Gmail rejected some IIER submission emails before even invoking its spam test. The Mailman list software which distributes IIER submissions to our associate editors gave me, as list administrator, the following terse rejection by Gmail concerning a submission from a prospective author with an email address @yahoo.co.in: 'host gmail-smtp-in.l.google.com ... SMTP error from remote mail server after end of data: ... Unauthenticated email from yahoo.co.in is not accepted due to domain's DMARC policy. Please contact the administrator of yahoo.co.in domain if this was a legitimate mail ...'. Google's Gmail Help pages[7] are comprehensive, generally easy to follow, have good referencing to advanced technical information, and the purposes are usually commendable, such as 'To help fight spam and abuse, Gmail uses email authentication ...', and '... you can help combat phishing to protect users and your reputation.' However, little is said about evidence concerning the value of DMARC's[8] contribution to these commendable purposes: is it 'small', 'middling', 'substantial', 'don't know'? Also, little is said about the underlying AI procedures for identifying unauthenticated email that may be spam, and how reliable these may be.
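For the curious, a domain's DMARC policy is not hidden: it is published as an ordinary DNS TXT record at _dmarc.<domain>, retrievable with a command such as 'dig TXT _dmarc.yahoo.co.in' or with a short script. A minimal sketch follows, assuming the third-party dnspython package is installed (pip install dnspython); yahoo.co.in is the domain from the incident above.

```python
# Minimal sketch: look up a domain's published DMARC policy, which
# is an ordinary DNS TXT record at _dmarc.<domain>.
# Requires the third-party 'dnspython' package.
import dns.resolver

domain = "yahoo.co.in"   # the domain from the incident above
answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
for record in answers:
    # TXT records arrive as tuples of byte strings; join and decode.
    text = b"".join(record.strings).decode()
    if text.startswith("v=DMARC1"):
        print(text)   # e.g. v=DMARC1; p=reject; ...
```

The p= tag in the record states the domain owner's requested policy for mail that fails authentication: none, quarantine or reject. A policy of p=reject is what leads receivers such as Gmail to refuse such mail outright, as appears to have happened to our prospective author.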
Could it be a little unfair to suggest that the AI geeks who create programs to combat spam and phishing should be providing more evidence about their efficacy? Have I acquired a bad habit after many years of reviewing research journal submissions in educational technology and educational research generally, namely asking about evidence for the utility, efficacy, stakeholder benefits, adoptability, sustainability, etc., of your innovation, your research findings, or your new method for teaching your subject? Why become irritable about 'time wasting', when I could think 'lifelong learning' about very interesting and important ICT matters? Have I forgotten the myriad ways for ICT to save time and reduce workloads? Am I citing incidents that are not related to AI as the term is understood in contemporary AI researcher and developer communities?
To return to the important part of this musing: such incidents may provoke attitudinal change, because time wasting and extra work reduce one's sense of trust in computers. The reduction in trust may be countered in part by reflective questions, as in the preceding paragraph, but we are entering an era in which more and more trust in computers is demanded, owing to AI advancing into more and more complex roles, such as driverless vehicles[9], humanoid robots[10], computer marking of NAPLAN essays[11], and assisting in the judgment of minor disputes in civil law[12]. If the programmers who create software for less complex purposes, such as detecting and countering spam, cannot always get it right, how can we trust the software for very complex roles, as in advanced applications of AI? Many observers, perhaps a little unfairly, mention the phrase 'natural stupidity'. The linking of 'artificial intelligence' and 'natural stupidity' has been around for some years[13], perhaps most frequently in the sentence 'Artificial intelligence is no match for natural stupidity'. Google the exact phrase to obtain evidence. My preferred version is 'There is a fuzzy dividing line between artificial intelligence and the natural stupidity of the programmers who created it'.
Perhaps that is an overly irritable response to the time wasting I have experienced. For a less grumpy perspective on AI, I offer 'Be alert, not alarmed'[14], borrowed shamelessly from a 2002 campaign by the Australian Government (now a forgotten campaign, but you can still Google the exact phrase). For a broader perspective, consider all the non-AI ways in which computer software can waste time irritatingly; for example, will we ever get a version of MS Word that is crash proof?
Author: Roger Atkinson retired from Murdoch University in June 2001. His current activities include honorary work on the TL Forum conference series, Issues in Educational Research, and other academic conference support and publishing activities. Website (including this article in html format): http://www.roger-atkinson.id.au/
Note: The version presented here is longer than the print published version, as it includes references that were omitted for space constraint reasons. Please cite as: Atkinson, R. J. (2018). Artificial intelligence versus natural stupidity. HERDSA News, 40(2). http://www.roger-atkinson.id.au/pubs/herdsa-news/40-2.html