Loads and loads of fun with GPT. Coder? Lawyer? Developer?

I had noticed a disturbing truth about GPT and other AIs (in reality, it is an industry-wide issue): they can get simple questions wrong. The truth about AIs is that they do not think, per se. Spoiler alert: they simulate thinking. What I wonder about is how they are branded as logical thinkers. Coding involves solving problems, and writing AI news, writing creative fiction, or generating ideas for businesses or universities requires exact precision. Does it have it?

The reality, I found through my tests, is that it does not. This article arrives amid the celebration of the release of GPT-5, the all-powerful model. It is definitely better at solving logical questions now: it generally solved the easier questions I asked it, but it did not always solve the harder ones, and it often failed them. So what better way to test it than to have it take the LSAT? Based on what you have read, nothing could go wrong, and this would be the best way ever to cheat.

It started off well. It got the first question correct, and the second one too. Then came the third question:

We have our first incorrect answer. Pretty shocking. So this is not something to rely on for an A. With passage-style tests, it would take too long to work through the questions, and it would be even less accurate, but there is no time to get into that right now.

There were three questions left, and it got one more wrong.

ChatGPT is not very logical. It got only four of the six questions right, 66.67 percent, which turns out to be a bare pass. It does not seem a reliable option to use this or similar (lower-powered) AIs to run our software, stock exchanges, or airplanes, as we increasingly think we should. ChatGPT: tool or toy? It is probably just a toy.
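For transparency, here is the arithmetic behind that score as a minimal Python sketch. Only the totals come from the test above (six questions, two wrong); the exact positions of the two misses among the later questions are my assumption for illustration.

```python
# Tally of the six LSAT questions described above.
# True = ChatGPT answered correctly, False = it did not.
# The order of the later misses is illustrative, not recorded.
results = [True, True, False, True, True, False]

correct = sum(results)                      # booleans sum as 1s and 0s -> 4
accuracy = correct / len(results) * 100     # 4 / 6 * 100

print(f"{correct}/{len(results)} correct = {accuracy:.2f}%")
# Output: 4/6 correct = 66.67%
```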