
Talking like the Robot HK-47

May 14, 2017

Back in freshman year, I began designing an AI system that could model the exchange of information between agents with imperfect information about the world around them. I sought to define what actions we accomplish by using speech. At the time, I could only come up with “queries” and “statements”. But the idea that language accomplishes only the stating of facts and the questioning of facts misses many important speech acts.


The robot HK-47, pictured here, announces the type of each speech act before speaking the sentence.

Now that I have studied linguistics at university, I can define the parts of language I was attempting to model more accurately. Answering questions like “what does this sentence accomplish?” and “in what context does this sentence make sense?” falls directly under the branch of linguistics called pragmatics. Pragmatics deals with understanding what speech accomplishes, what is implied and assumed in conversation, what is normally said in response to speech acts, and what is abnormal in conversation.

Using these speech acts alone, actors in an AI system can improve their knowledge of their surroundings and act to change them:

  • Questions [a person P about component C of object O]
  • Assertions [value V of component C of object O]
  • Requests/Commands [a person P to do action A with object O]
  • Promises/Threats [to person P that you will do action A]
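One minimal way to encode these act types as data is a tagged record carrying the object, component, and (where applicable) value slots from the list above. This is only a sketch; the `SpeechAct` class and its field names are my own invention, not part of any established library:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechAct:
    """One utterance, tagged with the act it performs (hypothetical schema)."""
    act: str                     # "question" | "assertion" | "request" | "promise"
    obj: str                     # O: the object under discussion
    component: str               # C: the component of that object
    value: Optional[int] = None  # V: present for assertions/requests/promises

# A question leaves the value slot empty; an assertion fills it in.
q = SpeechAct(act="question", obj="coal", component="quantity")
a = SpeechAct(act="assertion", obj="coal", component="quantity", value=6)
```

Questions carry no value by construction, which mirrors the bracketed notation: a question names a component of an object, and the matching assertion supplies the missing value.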

Imagine two people, Joe and Paul, who are in charge of managing a furnace and are conversing about their job.

Joe Question: Quantity[C] coal[O]?

Paul Assertion: Quantity[C] coal[O] 6[V].

Joe Request: Quantity[C] coal[O] 7[V].

Paul Promise: Quantity[C] coal[O] 7[V].

Paul gets 7 coal…

Joe Question: Quantity[C] coal[O]?

Paul Assertion: Quantity[C] coal[O] 13[V].
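As a rough sketch of how this exchange could be driven programmatically (the `FurnaceAgent` class and its method names are hypothetical, not from the original system), Paul can be modeled as an agent that answers questions with assertions and answers requests with promises it then fulfills:

```python
class FurnaceAgent:
    """Hypothetical agent that tracks one quantity and responds to speech acts."""

    def __init__(self, coal: int):
        self.coal = coal

    def respond(self, act: str, value: int = None):
        if act == "question":
            # Questions are felicitously answered with assertions of the current value.
            return ("assertion", self.coal)
        if act == "request":
            # Requests are answered with promises, which this agent immediately keeps.
            self.coal += value
            return ("promise", value)

paul = FurnaceAgent(coal=6)
print(paul.respond("question"))    # Joe: Quantity[C] coal[O]?       -> ('assertion', 6)
print(paul.respond("request", 7))  # Joe: Quantity[C] coal[O] 7[V].  -> ('promise', 7)
print(paul.respond("question"))    # after the promise is kept       -> ('assertion', 13)
```

The final assertion of 13 falls out of the state change: the promise commits the agent to an action, and the later question verifies its effect, just as in the dialogue above.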

While this extreme abstraction of language seems hard to parse, I feel it would allow for a more human method of solving ‘real world’ problems with AI. At the same time, programming the context of the situation would be very difficult.

There is a concept called the felicity of a statement. Felicitous statements make sense in their context, while infelicitous statements don’t. An infelicitous response is an unrelated reply: for example, answering “the Steelers” when someone says “pass the salt” is infelicitous. The AI would know in general to answer questions with assertions, but knowing when to request something would depend on the priorities given to the AI. The AI would have to know that the furnace needs 13 coal in order to know to request more.
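One coarse way to encode felicity between act types is a lookup table of acceptable follow-ups. The mapping below is my own simplification for illustration, not a linguistic standard:

```python
# Hypothetical felicity table: which response act types make sense
# as a follow-up to each initiating act type.
FELICITOUS_RESPONSES = {
    "question": {"assertion"},            # a question expects an answer
    "request": {"promise", "assertion"},  # a request expects acceptance or refusal
    "assertion": {"question", "assertion"},
}

def is_felicitous(initiating_act: str, response_act: str) -> bool:
    """True if the response act type is an acceptable follow-up."""
    return response_act in FELICITOUS_RESPONSES.get(initiating_act, set())

print(is_felicitous("question", "assertion"))  # True: answering a question
print(is_felicitous("question", "promise"))    # False: promising is not an answer
```

A table like this only rules out mismatched act types; it says nothing about whether the *content* of the reply is relevant, which is the harder context-modeling problem mentioned above.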
