The Toulmin Method is a technique for analyzing the logic of an argument using six elements: claims, reasons, evidence, warrants, qualifiers, and rebuttals (Green, pg. 318). Essentially a critical thinking framework, the model asks one to evaluate an argument by identifying its claim, stating the reasoning behind that claim, and then evaluating any evidence offered. Identifying the beliefs, or warrants, that lead us to accept the reasoning adds yet another layer of evaluation. Qualifiers keep a claim specific and reasonable, defending it against counterpoints by heading off opponents who would otherwise point out exceptions. Finally, rebuttal statements show an author's awareness of certain counterpoints and answer them directly, which is another important tool for making stronger arguments.
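To make those six elements a little more concrete, here is a minimal sketch in Python of how an argument might be represented as a data structure. The class and field names are my own invention for illustration, not anything from Green's text or from any real software:

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """One argument broken into Toulmin's six elements."""
    claim: str                                           # the conclusion being argued for
    reasons: list[str] = field(default_factory=list)     # why the claim should be accepted
    evidence: list[str] = field(default_factory=list)    # facts or data backing the reasons
    warrants: list[str] = field(default_factory=list)    # beliefs linking reasons to the claim
    qualifiers: list[str] = field(default_factory=list)  # limits like "usually" or "in most cases"
    rebuttals: list[str] = field(default_factory=list)   # acknowledged counterpoints and answers

# A small everyday example
arg = ToulminArgument(
    claim="You should bring an umbrella today",
    reasons=["The forecast calls for rain"],
    evidence=["The weather service predicts a 70% chance of showers"],
    warrants=["Weather forecasts are usually reliable a day out"],
    qualifiers=["unless the forecast changes this morning"],
    rebuttals=["Even if it stays dry, carrying an umbrella costs little"],
)
```

Laying an argument out this way makes the gaps obvious: an argument with an empty warrants or rebuttals list is one an opponent can attack.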
If you've ever been in a debate with someone and knew they were wrong but weren't sure why, this method could help you think critically about the argument's inner workings. It can also help you form your own arguments in the strongest way possible. But what if we gave this tool to an artificial intelligence? What a powerful tool it would be to analyze scholarly papers, Twitter threads, or news articles without the bias and ignorance that are inescapable for any individual human. Could this tool actually be used online, with computers and artificial intelligence? It turns out the answer is yes.
A conversational agent is a computer program designed to hold conversations with humans. In my brief research for this week's topic, I came across an article detailing researchers' efforts to develop a conversational agent that identifies flaws in arguments based on Toulmin's model. The goal of these programs could ultimately be to persuade readers or strengthen existing arguments, or they could be educational, teaching students what good and bad arguments look like. The former use could have some interesting consequences in this era of "fake news" awareness. Perhaps such technology could be used to create and reinforce false information as well as to detect and flag it. Has this technique already been employed by creators of online bots with agendas to carry out? I think it would be naïve to say "no," but I am not here to offer any such evidence, only to say that well-constructed arguments can be used to inform as well as misinform.
For educational purposes, conversational agents also have much to offer. Versions of these programs, with varying degrees of intelligence, have existed for some time now. Around the year 2000, a program called AutoTutor was developed to support college students learning basic computer science material. The program has defined expectations for student responses, and depending on how a student's answer compares with those expectations, AutoTutor decides the appropriate next step in the conversation.
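I don't have AutoTutor's actual code, but the basic idea of comparing a student's answer against a defined expectation and picking the next dialogue move can be sketched roughly like this. The word-overlap scoring and the three tutor moves below are my own drastic simplification for illustration; the real AutoTutor compared answers semantically, not by counting shared words:

```python
def overlap_score(answer: str, expectation: str) -> float:
    """Crude similarity: what fraction of the expectation's words appear in the answer."""
    answer_words = set(answer.lower().split())
    expected_words = set(expectation.lower().split())
    return len(answer_words & expected_words) / len(expected_words)

def next_tutor_move(answer: str, expectation: str) -> str:
    """Pick the tutor's next dialogue move based on how well the answer matched."""
    score = overlap_score(answer, expectation)
    if score > 0.7:
        return "positive feedback: move on to the next topic"
    elif score > 0.3:
        return "prompt: ask the student to fill in what they left out"
    else:
        return "hint: restate the question with a clue toward the expectation"

expectation = "the cpu fetches decodes and executes instructions"
print(next_tutor_move("the cpu fetches and executes instructions", expectation))
# -> positive feedback: move on to the next topic
```

The point is just the shape of the loop: the tutor holds an expected answer, measures how close the student got, and branches the conversation accordingly.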
Imagine with me for a moment a plug-in for your smartphone that evaluates the statements and arguments you make when posting online... What if every time you tried to argue with someone, you got a Toulmin warning that your argument isn't sound? What if you could turn on such a tool and it would point out bad arguments on news sites or in Twitter threads? What if it could be programmed to identify logical fallacies and bias? The researchers in the article I linked summarized their work by stating that Toulmin's model provides a good base for conversational agents to choose between different paths to take when learning.
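As a pure thought experiment, the simplest version of such a plug-in might just check a post for missing Toulmin elements before you hit "post." Everything below is hypothetical; it assumes some upstream language processing has already tagged the parts of the argument, which is the genuinely hard part:

```python
# Hypothetical "Toulmin warning" check. Assumes an upstream NLP step has already
# tagged the parts of the user's post; here we just receive them as a dict.
REQUIRED = ["claim", "reasons", "evidence", "warrants", "qualifiers", "rebuttals"]

def toulmin_warnings(parsed_post: dict[str, list[str]]) -> list[str]:
    """Return one warning for each Toulmin element the post is missing."""
    return [
        f"Toulmin warning: your argument has no {element}."
        for element in REQUIRED
        if not parsed_post.get(element)
    ]

post = {
    "claim": ["This policy will fail"],
    "reasons": ["Similar policies failed in the past"],
    # no evidence, warrants, qualifiers, or rebuttals supplied
}
for warning in toulmin_warnings(post):
    print(warning)
```

Even a toy check like this hints at what the warning might feel like in practice: nag messages until you back your claim with evidence and acknowledge the other side.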
In my brief research this week, however, I was unable to find an example of such an AI currently in use, or a website showing the results of one applying the Toulmin method, though I am almost sure it is out there. I would be so interested to see the results if we unleashed such a program on CNN or FOX News, or on Reddit or Twitter. The really interesting thing to consider here is that, with the use of such a tool, we might be faced with the realization that we aren't as smart as we think we are. We are often fooled by bias and by those with agendas to sway public opinion. How amazing would it be to see that the arguments we base our personal beliefs on may be deeply flawed? How much more difficult would it be to lie and misinform? Or how easy would it be to use such technology to make false arguments that fool people? Critical thinking will only become more important as we move into the AI world, both for us and for online intelligences.
Works Cited
Green, Julia. Communicating Online. McGraw-Hill Create, 2022. VitalSource Bookshelf.
Frontiers in Artificial Intelligence, 2021. https://doi.org/10.3389/frai.2021.645516