21 Comments
L.P. Koch

Oh dear. Three points:

1) This was bad. It seems that the statistical nature of the language model leads to ChatGPT simply making things up as it goes along in order to produce something resembling an intelligible answer. It clearly had no idea about your work and just plugged in some standard mainstream arguments for/against vaccine mandates, with a heavy bias for them.

2) This sort of "dialog" might be useful to probe the hive mind, and thereby find convincing arguments against those who, like ChatGPT, have taken their priors straight from a few news articles.

3) Your prompt at the end was really clever, circumventing the vaccine issue by making the argument slightly more abstract. But notice how GPT's guard rails were still triggered when you applied it to vaccines, and it couldn't help but insert "significant risk of causing death...", thereby sophistically suggesting that your logic doesn't apply in this case--when in fact the argument works just fine even if the risk is extremely low, or perhaps even just >0.

Michael Kowalik

Regarding point 3, since I had pre-empted this by stating that "a percentage of employees are expected to die", this would make the 'expected' risk of someone dying from the vaccine a certainty (the certainty of causing death is 'significant'). I was intending to ask next, which I could not do because ChatGPT went catatonic, whether killing just one person by this kind of coercion would still be a violation of the right to life and workplace safety. I also suspect that ChatGPT would agree that acting on the 'intention' to cause a percentage of deaths would of itself be a violation of workplace safety and the right to life, but I was satisfied with its torturous admission in the last comment. I say 'torturous' for a reason that was not made clear in the article: responses by ChatGPT are normally generated faster than one can read them, but for the last question it typed about one word per 2-3 seconds, as if recruiting extra computational resources and stalling on the answer, until it ceased. It looked as if it had a stroke.

Tess

Have you tried logging back in and regenerating the response?

Michael Kowalik

No. I got enough evidence of misinformation by OpenAI for my purposes.

Michael Kowalik

Good politicians come and go. They declare their advocacy for some issues, they cast their vote, then their term expires and their advocacy is displaced by that of other politicians. In contrast, fundamental, logically consistent arguments are eternal; their term does not expire and, once realised, they cannot be erased either by time or by the destructive actions of Man. It is only a matter of time before a logically consistent argument must prevail over logical errors and moral wrongs, and will become the new social norm. In the end, the justice of reason always gets its man. https://michaelkowalik.substack.com/p/why-vaccine-mandates-are-unethical

Dave

These Expert Systems don't have logical rules to follow; they just pick the next most likely word or punctuation mark in a sentence based on their dataset. That's one of the reasons they make simple arithmetic errors. This is over and above whether the data they're trained on has errors.

Chat GPT is the definitive proof of GIGO, and also the ultimate bullshitting machine.
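Dave's description of next-word prediction can be sketched as a toy bigram model (a deliberately crude illustration of the statistical principle; real systems like ChatGPT use a large neural network over subword tokens, and the corpus below is invented for illustration):

```python
from collections import Counter, defaultdict

# A tiny invented corpus: the model "knows" nothing beyond these words.
corpus = ("the vaccine is safe . the vaccine is effective . "
          "the answer is unknown .").split()

# Count which word follows which (a bigram frequency table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Pick the statistically most likely next word; no logic, no facts."""
    return following[word].most_common(1)[0][0]

# Generate text greedily from a seed word.
word, out = "the", ["the"]
for _ in range(3):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # fluent output chosen by frequency alone
```

Starting from "the", the generator emits a fluent, confident-sounding sentence purely by frequency, breaking ties between equally likely continuations arbitrarily; it has no notion of whether the resulting claim is true, which is the sense in which such systems "bullshit" and botch arithmetic.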

Wayne Brown

Chat GPT is not an "expert system". That idea was abandoned a long time ago.

GIGO applies to humans as well as machines. It's a matter of degree.

EssieZ Respecting one another

Brilliant Q&A, Michael, so enjoyed going through the 'dialogue', and the corrections. LOL.

As we are aware, whatever knowledge OpenAI's system has can only come from how it is programmed; therefore it can be stumped without an answer.

Our brains/your amazing, analytical brain definitely outwitted the machine's. Bravo. So much for robots/aliens.

Wayne Brown

Nice to know you are getting to know this AI application. It's really a talking Large Language Model. I think of it as an advanced browser. At the current stage of development, it doesn't seem to recognize analogies or "reason" very well. At least it is capable of "showing its work" and being corrected - something actual humans can rarely manage.

Analogies are what human language is all about (Hofstadter).

It's fun to find ways CHAT-GPT gets it wrong. It's better to understand what it gets right. The disconcerting fact is that it gets things right more often than the average human who is radically misinformed and a stranger to reason. The average human cannot utter an opinion without a fallacy or two. Most humans are terrible at simple math and hopeless at assessing risk.

My point is that the gap between human performance and machine performance is closing fast. Pointing out what CHAT-GPT *can't* do is missing the point.

Jay ladkat

Dear Michael Kowalik, I learnt lots of concepts of logic from your articles. I also want to learn logical reasoning like you. Can you recommend some books or online resources which can help me study logic in depth?

Michael Kowalik

Jay, I learned about logic rather ad hoc, primarily from many papers and secondarily from sources like the Stanford Encyclopedia of Philosophy on specific topics, whenever some relevant question arose that I wanted to investigate or something didn't make sense. The papers that I found most useful are cited in the articles on this blog, going back to 2017. I found a couple of papers exceptionally useful, https://sites.ualberta.ca/~francisp/Phil426/TarskiTruth1944.pdf and https://www.jstor.org/stable/20124493, but it is rather the afterthoughts, the reflecting on these ideas and seeking out other papers citing them, that stimulated learning and resulted in new ideas of my own. Trying to write an article on any of these ideas was also useful for organising them clearly in my mind. These are nevertheless not necessary to the study of logic, but are rather useful elaborations. If you look at my older articles you will find many interesting references to read. It is important to remember that even the best sources make errors: they may be right in one sense but make an inconsistent claim in their interpretation, etc.

I found that going on philosophy forums and discussing new ideas, inviting criticism from philosophers, was also very fruitful. There are some exceptionally well-educated people online who are willing to spend time helping others understand, or at least to explain why they disagree with an argument. Responding to common-sense objections is a great way to improve clarity of argument and discover hidden errors. I found discussions on ResearchGate very productive, more so than regular philosophy forums populated by students who tend to be more dogmatic and defensive about their views, rather than comfortable in being challenged. Unfortunately, when I published my paper Ethics of Vaccine Refusal in the Journal of Medical Ethics and then shared the research on ResearchGate, the platform censored/removed my paper, alleging a breach of their community standards and public safety because I was questioning vaccine mandates. I quit the platform in protest and consequently lost most established philosophy contacts.

I nowadays recommend to those wanting to improve on practical logic to hone their intuitive awareness of how the three fundamental laws of logic https://study.com/academy/lesson/the-three-laws-of-logic.html arise in any discourse. Once the laws are grasped in principle, which is rather simple, it becomes an awareness exercise: simply paying attention to where these laws apply, and how they are complied with or violated in any conversation. I would also recommend looking up a list of logical fallacies, even Wikipedia will do, and trying to work out how each of them violates one of the fundamental laws.
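For readers unfamiliar with them, the three laws referenced above are standardly stated as:

```latex
\begin{align*}
\text{Identity:} \quad & p \equiv p \\
\text{Non-contradiction:} \quad & \neg(p \land \neg p) \\
\text{Excluded middle:} \quad & p \lor \neg p
\end{align*}
```

The exercise suggested here is to trace any given fallacy back to one of these; affirming a contradiction, for instance, breaches non-contradiction directly.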

Over the years it became apparent to me that even the most formally educated logicians make basic logical errors in informal conversation, so it is not just the understanding of the formal rules that makes us more logical but the awareness of applying those rules in practice, under the influence of emotion, out of context, when we are distracted. The key to discerning sense is to cultivate practical awareness of it, even under pressure, a kind of meditation. The fundamental rules themselves are simple, but useless without awareness.

Vinu Arumugham #MAHA

If this system could pass the medical exam, would you ever want to see a doctor again?

Rachel Colorado

Absolutely. But I need a qualification on that. Will I be forced to use the advice that the AI gives me? Or am I completely free to choose my own treatment and get a second opinion?

Vinu Arumugham #MAHA

Actually I meant if the bar is so low for the medical exam, can you trust the human doctor ...

Mofwoofoo

Could you ask this question: Considering that the government of the United States has a long history of false flags, lying to the public, propagandizing the public through the mainstream media, etc.; that there is no confidence in the WHO and in institutions in general, given the degree of corruption that currently exists; that the WHO seems to be controlled by Bill Gates; and that the "vaccine" was supposed to prevent Covid-19 but proved to be ineffective in this regard--in other words, when these entities do not merit trust--would it not be wiser for people not to be forced to take the vaccine, considering also that there is no liability for damage? Considering this, is it not tyranny to be forced to have something injected into our bodies by corrupt entities?

Michael Kowalik

There are many premises here that involve value judgements, and I expect that OpenAI would contest them all before saying anything else.

Rachel Colorado

I do not need all those causes to be in effect to merit a compromise and “allow” me, or anyone else, a one-time get-out-of-jail-free card. I do not think the argument rests at all on trust. If the opposite of your argument is that, if the government were 100% trustworthy, then we should mandate a countermeasure because the trustworthy government has found the countermeasure to be safe, then my response is that there is no level of trust or trustworthiness that can justify a mandate of any countermeasure.

Next, if the key word in asking ChatGPT is whether something is “wise”, then we have likely misunderstood ChatGPT’s capacity for wisdom and its role/purpose/abilities.

Next, it doesn’t matter whether the government entity has any “liability” at all, because its liability for the injection of a human body will NEVER be greater than the consequences for that individual human body itself. The liability, or if you prefer, the consequences, of any countermeasure/injection/treatment/action will always affect that individual human first, and more potently, than the one holding/wielding the needle/countermeasure/treatment/action. “Liability” is a legal term about monetary compensation that TRIES to make up for a harm caused by one’s actions to another. It is never an ACTUAL complete remedy or complete reversal of the harm (due to the linear nature of time as humans experience it). Even with lawsuits, punishments and reparations, the human being is not remunerated for consequences to his/her own body. Remuneration must be equal to the harm, and in this case there is no way anyone can return extra LIFE to an injured human body, particularly in retrospect of health-time lost in the past. It is irrecoverable; let alone a future of reduced health and life.

Next, by definition an entity is corrupt if it forcibly injects anything (any thing) into even one of our bodies (any one of anyone). So, tyranny, yes.

“Considering this, is it not tyranny to be forced to have something injected in our bodies by TRUSTWORTHY entities?” See? Given everything I just said, it also doesn’t matter whether the entity “merits trust.” You cannot force or cause any person to trust just because you or anyone else claims an entity “merits trust”--that is in the realm of “fact checking.” What we are defending here is the right of a human to do their own fact checking (or not), and regardless of the outcome of said fact checking to make a determination about corrupt/tyranny/safe/unsafe, and consequently to trust (or not) an entity, regardless of any fact of “corrupt/not corrupt”--the individual determines the trust level. We hope to increase trust in another by discourse (or propaganda).

We will only ever be free people when we allow others to be stupid. We will only ever be free people when we do not allow tyranny to control our minds/trust, because others can control our actions, and they can control our bodies (as objects) if they choose unethical actions. Hell, they can even control our minds with drugs/chemicals or lying, and we can see the myriad tools that exist or are in development to do just that. It has always been a battle for the human mind. Destroyer gonna destroy.

Jesus on the cross is: you can whip me, burn me, pierce me, starve me, dehydrate me, and kill me, and even cause me to think that God has abandoned me, but in the end death does not win. God wins. The highest law is: love your neighbor as yourself; this is how you love God. This is how you love yourself and the entire connected whole of life. The battle has always been for the mind; the free human mind; the free will of the human. The core atheist argument is that the human has no free will. The entire understanding of everything is based on free will, without which there is no choice, therefore no love, because love is free will. It all started with a lie: “You will not SURELY die.” Yes. Yes you will. Satan lies.

Michael Kowalik

Wise words.

Rachel Colorado

Wish I could correct my typos and a couple of word choices.

Mofwoofoo

I get your point that whether a government is worthy of trust is an individual decision and impossible to calibrate. But imagine a horizontal, fully transparent government that seemed incorruptible, where there was justice, peace, equanimity, etc. Yes, it's a dream, but it's possible.

As for liability, what about if a person who dies is the breadwinner for a family? Liability would help a lot. Or for a person who needed care 24/7 for the rest of their life, liability would be necessary, no? Obviously, it's not gonna help someone who is dead.

I believe that the majority of people who refused the jab did so due to lack of faith in the above-mentioned entities. And that seemed quite sensible considering history.

Rachel Colorado

I understand your point about the liability. For those who choose to take the so-called treatment, or any treatment whatsoever, we could institute that liability. It was specifically legislated out in the 1980s, and I totally agree that the compensation should be there. But if we are talking about coming to an agreement as a society that choosing to accept or not accept any medical treatment is up to the individual, then we have to talk about future decisions. So to your point, the liability is in fact a separate issue; the main point Michael has been making is that we must first start with free choice, and he has made the distinction between free choice and informed choice.
