Understanding Human-AI Communication Ambiguities: Insights from a Mid-Air Collision
In the rapidly evolving world of artificial intelligence, the interaction between humans and AI systems is becoming increasingly complex. This complexity is particularly evident in the use of generative AI and large language models (LLMs), where communication ambiguities can lead to significant consequences. A recent incident involving a mid-air collision between a military helicopter and a commercial plane has highlighted the critical nature of communication ambiguities, not only in human-to-human interactions but also in human-to-AI exchanges.
This article delves into the intricacies of communication ambiguities, drawing insights from the aforementioned aviation incident and exploring their implications in the realm of AI. The analysis is part of an ongoing series that examines the latest developments in AI, focusing on the complexities and challenges that arise in its application.
Ambiguity in Communications: A Case Study
The recent mid-air collision in Washington, D.C., involving a military helicopter and a passenger plane, has brought the issue of communication ambiguities to the forefront. The National Transportation Safety Board (NTSB) is currently investigating the crash, and while the final report is pending, preliminary audio from air traffic control (ATC) communications has been released. This audio has sparked discussions about the potential role of communication ambiguities in the incident.
In the audio, the military helicopter is designated as “PAT-25,” while the passenger plane is referred to as “CRJ,” a generic term for Canadair Regional Jets. This non-specific designation is a source of ambiguity, as there were multiple CRJs in the vicinity at the time of the incident. The following is a transcription of a portion of the audio:
- ATC Controller: “PAT-25, do you have the CRJ in sight?”
- ATC Controller: “PAT-25, pass behind the CRJ.”
- Helicopter pilot: “PAT-25 has aircraft in sight. Request visual separation.”
Upon examining this exchange, it becomes apparent that the controller and the pilot may have been operating under different assumptions. The controller’s reference to “the CRJ” was ambiguous, given the presence of multiple CRJs. The helicopter pilot’s confirmation of having “aircraft in sight” was equally ambiguous, as it did not specify which CRJ was meant.
This miscommunication could have led to the tragic collision, as the helicopter pilot might have been referring to a different CRJ, one that was perceived to be at a safe distance. The NTSB’s final report will likely shed more light on the significance of this ambiguity.
Controller-Pilot Communication Ambiguities
The issue of communication ambiguities between controllers and pilots is not new. It is a well-documented problem that arises when either party believes they are being specific while, in reality, their communication is ambiguous. Often, one party will recognize the ambiguity and seek clarification, but this is not always the case.
In high-pressure environments like aviation, where both pilots and controllers are often overloaded with information, ambiguities are bound to occur. The assumption that both parties are fully aware of each other’s intentions can lead to dangerous misunderstandings, especially in critical situations.
Human-to-Human vs. Human-to-AI Communication
While the incident described involves human-to-human communication, similar ambiguities can occur in human-to-AI interactions. As generative AI becomes more prevalent, users are increasingly relying on these systems for a variety of tasks, some of which are high-risk and life-dependent. In such scenarios, communication ambiguities can have serious consequences.
Two critical questions arise in the context of human-to-AI communication:
- Human awareness of ambiguities: Are users aware of potential ambiguities in their interactions with AI, or do they inadvertently overlook them?
- AI design and ambiguities: Are AI developers considering the need to address potential ambiguities in their systems?
In the rush to deploy the latest generative AI technologies, there is a risk that both users and developers may not prioritize the issue of ambiguities, leading to undesirable outcomes.
Examples of Generative AI Usage Ambiguities
To illustrate how ambiguities can arise in human-to-AI communication, consider the following examples using a popular generative AI application, ChatGPT. These scenarios demonstrate different types of ambiguities and their potential implications:
Human is Ambiguous, AI Seeks Clarification
In this scenario, the user provides an ambiguous prompt, and the AI seeks clarification:
- User prompt: “Can you help me with my report?”
- AI response: “Of course! Could you specify what kind of help you need? Are you looking for research, editing, formatting, or something else?”
- User prompt: “It turns out that I specifically need help summarizing the key points.”
Here, the AI successfully identifies the ambiguity and requests further information, preventing potential misunderstandings.
AI is Ambiguous, Human Seeks Clarification
In this example, the AI provides an ambiguous response, prompting the user to seek clarification:
- User prompt: “What’s the best investment for me?”
- AI response: “Real estate could be a great option.”
- User prompt: “What do you mean by ‘great option’? Are you basing this on market trends, my risk tolerance, or something else?”
- AI response: “Good question! I based that answer on general market trends. Would you like me to factor in your risk tolerance and financial goals?”
The user’s request for clarification helps ensure that the AI’s recommendation is relevant and tailored to their specific needs.
Human is Ambiguous, AI is Ambiguous
This scenario represents a double ambiguity, where both the user and the AI are unclear:
- User prompt: “Tell me about the car that I am considering buying.”
- AI response: “The car is a good choice for you.”
- User prompt: “Thanks, I will go ahead and buy the car.”
- AI response: “Great, let me know if you need any assistance in doing so and I can bring up the details and pricing of the car.”
In this case, both parties are operating under different assumptions, leading to a potential miscommunication.
Human is Clearcut, AI is Clearcut
Finally, an example of clear communication between the user and the AI:
- User prompt: “Show me the two key bullet points from my meeting notes, titled ‘Marketing Strategy’, which I uploaded into my Shared AI folder.”
- AI response: “Based on the meeting notes entitled ‘Marketing Strategy’ that I found posted in your Shared AI folder, here are the two key points identified: (1) Define your marketing goals, and (2) Specify tangible marketing metrics associated with each of the goals.”
- User prompt: “This is great! Could you expand on the first key point regarding defining marketing goals and give me a full sentence articulating the topic?”
- AI response: “Absolutely! A more detailed indication about defining marketing goals is this draft sentence: Marketing professionals must carefully specify in writing the requisite marketing goals that are associated with an overall marketing strategy that they are planning to undertake.”
This interaction demonstrates effective communication, with both parties clearly understanding each other’s intentions.
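The four scenarios above form a simple two-by-two grid: each side of the exchange is either ambiguous or clearcut. As a rough illustration of that taxonomy, here is a minimal sketch that sorts a single user/AI turn into one of the four quadrants. The keyword lists are hypothetical stand-ins for illustration only; a real system would need far richer language understanding than phrase matching.

```python
# Toy illustration of the two-by-two ambiguity grid discussed above.
# The phrase lists below are illustrative assumptions, not a real
# ambiguity detector.

VAGUE_USER_PHRASES = {"my report", "the car", "help me", "best"}
VAGUE_AI_PHRASES = {"good choice", "great option"}

def is_ambiguous(text, vague_phrases):
    """Flag text that contains a vague reference with no clarifying detail."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in vague_phrases)

def classify_exchange(user_prompt, ai_response):
    """Place one user/AI turn into the two-by-two ambiguity grid."""
    user_state = "ambiguous" if is_ambiguous(user_prompt, VAGUE_USER_PHRASES) else "clearcut"
    ai_state = "ambiguous" if is_ambiguous(ai_response, VAGUE_AI_PHRASES) else "clearcut"
    return (user_state, ai_state)

# Example: the "double ambiguity" car scenario lands in the worst quadrant.
print(classify_exchange(
    "Tell me about the car that I am considering buying.",
    "The car is a good choice for you.",
))  # → ('ambiguous', 'ambiguous')
```

The point of the sketch is that the double-ambiguity quadrant is the dangerous one: neither side has enough information to notice that a misunderstanding is underway.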
Insights on Human-AI Communication
Natural language, by its very nature, is semantically ambiguous. Words can be interpreted in multiple ways, leading to potential misunderstandings. In human-to-human communication, participants often navigate these ambiguities, but the stakes are higher when interacting with AI, especially in time-sensitive and high-risk situations.
One perspective is that the responsibility for managing ambiguities should lie with the AI. Developers should ensure that AI systems are designed to detect and address ambiguities, seeking clarification when necessary. However, users also play a crucial role in this process. By actively seeking clarification and providing clear instructions, users can help mitigate the risks associated with ambiguous communication.
There is a growing call for regulations or laws that require AI systems to handle ambiguities effectively. Such measures could help ensure that AI technologies are safe and reliable, particularly in critical applications.
In conclusion, it is essential to remain vigilant when interacting with AI. Human-to-AI communication is subject to the same ambiguities as human-to-human interactions. By staying alert and seeking clarification when needed, users can help ensure that AI systems operate as intended, minimizing the risk of misunderstandings.
As Adam Smith wisely noted: “On the road from the City of Skepticism, I had to pass through the Valley of Ambiguity.” This journey is one that we all must navigate, especially in the age of AI.
Originally Written by: Lance Eliot