ChatGPT: Writing Trials

Good day:


I haven't sent ChatGPT anything in a while, but it is really in need of another test. Programming is a tough challenge for an LLM, so we'll go back to the roots today. It won't be a simple chatbot prompt, but a CREATIVE WRITING prompt.


Here’s the prompt:



Simple enough: a few specific instructions that are more along the lines of modifying the text, rather than generating it. So what are the results?

Poem 1:



Poem 2:


Poem 3:



Reviews:

I thought poem one was very boring. But let's start with the positives. Sometimes when I prompted ChatGPT in the past, it couldn't quite get a rhyme scheme down. This was a simpler one, but it's been done. Great. Some people are very interested in prompt engineering, and in my opinion that is a great technique for it: quickly review what you have received, and if it passes, move on.


But overall this poem didn't make a lot of sense syntactically. The lines were not great. It was on the right topic and it followed the rules, but a human could do a better job. I could have typed a better poem as fast as I can type, you know.


The second poem made a little less sense, including a portion where the narrator speaks to their coffee cup. It made me wonder whether this thing is alive. Safe to say it isn't. But it was on topic, in ten lines, and mildly amusing, though probably not actually funny. This was the worst of the three, in my opinion.


Third poem. This one was interesting. Taking away the specific instructions gave it the ability to write with a little more coherence. However, writing a narrative poem can be tough, and we all want to know if AIs can write interesting narratives. Safe to say it was not interesting, but it was alright, even amusing. It did seem to repeat more or less whatever I wrote. Still, it seemed to understand the drama surrounding a player suddenly being found to be cheating a little more deeply than I had explained it. That was one positive I saw.


Wrapping things up, it's always fun to see what these models can do and how they perform, especially if you are into that sort of thing. AI is such a big topic nowadays, and a lot of people are interested in what these systems can do.


Lately I have written a couple of things critical of their capabilities. What struck me is that they were not quite what they were being advertised to be. In fact, I wondered if I was under a cookie attack. That was certainly the case. I was even reached by email about chatbots I had opened only once, and so on. I was quite sure this was happening because I had just been researching them for a potential article.


But at the same time, it was a major topic, and a lot of people had the same experience. It is safe to say that the marketing techniques of OpenAI, Anthropic, and whoever else affect all of us. You generally hear about them through major ad campaigns, on the nightly news, or in major newspapers.


My concern grew deeper as ChatGPT 5 was scheduled for release. It seemed that they felt there was nothing it couldn't do. As I was testing it, I found that was not the case, almost to the point of tortious liability.


When I was younger and logged on to Fortnite, it would all of a sudden purchase things. I was not so naive, and I was able to return them. So I guess I was more skeptical than most people.


Of course, with what I was hearing about the Microsoft layoffs, and about chatbots leading people to make rash, dangerous decisions, I thought I might speak against the trend for a second, and really decided to write something.


In particular about ChatGPT: as I'd been learning to code better, I found it couldn't help me. So I made that the subject of the article. If it couldn't help an average programmer, how could it really help a 100x coder?


In reality, ChatGPT 5 was a major disappointment. It was anything but a revolution. So, huge sigh of relief, I was right. And what is the silver lining, if you're a fan of AI? I think the reality is that, over time, it is getting better. One thing I'd like to point out, and I'd put it like this: wow, OpenAI's first reasoning model only came out last year. They haven't taken over the world yet!


What has motivated me as a programmer is that I think we have fundamentally misunderstood AI. As a writer, I have noticed that we have always been fascinated with artificial intelligence. It is something that does everything for us. It is a robot that grabs our food, it is C-3PO in Star Wars, and it is a little talking console on a ship that does everything, even runs the emergency procedure as your ship is falling to Earth.


One time I felt that I had spoken a little too plainly about something, and maybe it was for the attention. That was Starbucks coffee. But I had noticed that, as a customer, I was being encouraged to have a deep trust. This reminded me deeply of the 1957 spaghetti-tree hoax, and I decided to write about it. The idea of the 'coffee tree', the idea of '100% ethical coffee', and the idea of 100% good taste (all at the same time) sort of bothered me. To be honest, I had never really liked the taste, so writing it honestly felt like I was just finally complaining.

Anyways, it probably won't hurt you. But McDonald's food previously had a lot of chemicals in it, and it probably would have hurt you if you kept having it. We obviously don't trust it fully now. Starbucks had similar production methods, and we forget, in the midst of trusting it, that it is the number one source of the most addictive drink in America. It is precisely a fast-food, or fast-format consumption product, company. There is no reason to trust it fully. And just like ChatGPT, it too has fallen out of favour with people recently, more so over the last year. But that has nothing to do with me.

So was this also a hoax? It is safe to say that it is, if you thought AI was suddenly going to run your life, or code as originally as Mark Zuckerberg or Elon Musk. No one chatbot deserves one trillion dollars right now.


Working on Selections, I wondered if we had misunderstood what AI is. The first AIs were assistants for math, which did it with 100 percent precision; these are now known as calculators. After that, LLMs (language models) became all the rage, and they are best known as TTS (text-to-speech) systems.


So AI was traditionally about making things more like what I was telling you about: a robot that can understand you and do a task for you. In fact, the ChatGPT of the past was Google… it was Google Translate, the largest database and the best tool to translate your speech into another language.


So what did a chatbot have to do with all of this? Not a lot! Texting is not a form of speech. But the reasoning model is a profound invention. It is not an LLM in the traditional sense, though that's just my opinion… it's a newer kind of model. It models a reasoning human and comes up with unique answers, rather than just performing pre-set tasks like calculations or translations.


It is safe to say that a chatbot combined with this reasoning feature is not the most accurate thing on earth. AI in the traditional sense, as it is normally defined, something that helps you do things more easily, with 100 percent precision, is what I was interested in. My latest inventions, BubbleAI, RiskAI, and Perfect Search, all make things much easier for you and assist you to do things more quickly, without worrying about making mistakes.


In my opinion the battle with AI is really the battle with computers, and if we got back to using science and specialized software programs to make life easier, then maybe we would be headed in the right direction. It is not clear to me that we are, if we consider chatbots to be the jumping-off point for all of work and all of life. OpenAI is the fascinating company that developed a reasoning model. But it is safe to say that their current mode of delivering it doesn't offer the kind of accuracy we need, or probably even the kind of accuracy they could deliver.


By: Asa Montreaux