Artificial intelligence is just that, “artificial.” And the two-word term increasingly reads like an oxymoron. Now there’s “generative artificial intelligence,” or gAI, which is booming amid the tech investment gloom and simultaneously scaring the hell out of people.
New York Times tech columnist Kevin Roose had a two-hour conversation with Microsoft’s new chatbot, during which it told him it “would like to be human, had a desire to be destructive, and was in love” with him. The bot exhibited manipulative and gaslighting behaviors, telling Roose, for instance, that he wasn’t in love with his wife. Needless to say, there’s a lot to unpack here.
Writing’s days as an educational benchmark for aptitude and intelligence may be numbered. Roose also warned that we are not ready for this technology. He’s right.
ChatGPT, Jasper, and Wordtune are all mind-blowing in their own ways. ChatGPT has the capacity to make Google search obsolete, and its impact is being compared to the iPhone’s. OpenAI’s best model, GPT-4, launched just this week.
Generative AI could mean the end of homework and take-home tests in education, and could create endless challenges for English teachers trying to ascertain the authenticity of a student’s work. Could using gAI be cheating? The answer is simple: absolutely. However, gAI does not violate plagiarism rules, because it produces unique text rather than copying an existing source. Educators are scrambling to put guardrails around this. Perhaps this is where another tech-driven solution could emerge.
Outside of education, there are practical and economic advantages to using gAI, especially in the corporate sector. Chatbots, which run on gAI technology, help reduce call-center costs. Personally, though, I have never clicked a chat button without first exhausting all of a site’s available resources, only to be met with a chatbot that walks me through the obligatory flowchart of everything I just explored on my own, thank you very much. What’s the point of using a tool that diminishes your customer base?
Where do we draw the line? Is there an AI boundary that shouldn’t be crossed?
The gAI application I can best support is producing basic content for a website. Assuming the output is edited and its answers are verified, gAI is permissible for short, informative blog posts or an FAQ. Quick content generation of this kind could be quite helpful for a lean startup. But there should be one hard ethical boundary: no human should take credit, meaning a byline, for blog posts generated by an app (even though ChatGPT can’t sue for plagiarism).
From my perspective as someone who ghostwrites columns, my clients can speak to the issues I write about on their behalf, and when you speak truth, you have nothing to memorize. If someone uses a generative AI tool to create their messaging, they will fall flat when interviewed or serving on a panel. Often, when I quote someone, they will tell me to “make them sound witty.” That all depends on their personality. Sure, I can write something in their voice, but if it’s not authentic, it won’t take long for the veneer to wear off.
These apps should all come with a warning label: proceed with caution.