Loads and loads of fun with GPT. Coder? Lawyer? Developer?
I had noticed a disturbing truth about GPT and other AIs (in reality, it is an industry-wide issue): they can get simple questions wrong. The truth about AIs is that they do not think per se. Spoiler alert: they simulate thinking. What I wonder about is how they are branded as logical thinkers. Coding involves solving problems. And writing AI news, writing creative fiction, or generating ideas for businesses or universities requires exact precision. Does it have it?
The reality, I found through my tests, is that it does not. This article comes upon the celebration of the release of GPT-5, the all-powerful model. It is definitely better at solving logical questions now. It generally solved the easier questions I asked it, but it did not always solve the harder ones; it often failed them. So what better way to test it than to have it take the LSAT? Based on what you have read so far, nothing could go wrong, and this would be the best way ever to cheat.
It started off well. It got the first question correct. It got the second one right. Then came the third question: