Definition of rationality
I'm having a little trouble with the definition of rationality, which goes something like: "An agent is rational if it maximizes its performance measure given its current knowledge."
I've read that a simple reflex agent will not act rationally in a lot of environments. E.g. a simple reflex agent can't act rationally when driving a car as it needs previous perceptions to make correct decisions.
However, if it does its best with the information it's got, wouldn't that be rational behaviour, since the definition says "given its current knowledge"? Or is it more like: "...given the knowledge it could have had at this point if it had stored all the knowledge it has ever received"?
Another question about the definition of rationality: Is a chess engine rational because it picks the best move given the time it's allowed to use, or is it not rational because it doesn't actually (always) find the best solution (it would need more time to do so)?
You're right. Is that definition out of the Dummy's Guide to Workaholism, or out of ML English Redefined? In a cartoon I saw, the dog decided to stop doing tricks and got its owner to find the bones instead, using the reward of fresh mocha express. Rationality is not about performance; it can just as easily lead to ending a performance. The definition would make sense if it began with, "An agent is vulnerable to intellectual manipulation if ..." Maybe that's what we want from AI agents, but we shouldn't call that rationality.
– han_nah_han_
Nov 28 at 0:15
2 Answers
I've read that a simple reflex agent will not act rationally in a lot of environments. E.g. a simple reflex agent can't act rationally when driving a car as it needs previous perceptions to make correct decisions.
I wouldn't say that the need for previous perceptions is the reason a simple reflex agent doesn't act rationally. The more serious issue with simple reflex agents is that they do not perform long-term planning. That is the primary reason they don't always act rationally, and it is consistent with the definition of rationality you provided: because a reflex-based agent typically doesn't plan ahead, it often fails to do the best it could given the knowledge it has.
Another question about the definition of rationality: Is a chess engine rational because it picks the best move given the time it's allowed to use, or is it not rational because it doesn't actually (always) find the best solution (it would need more time to do so)?
An algorithm like minimax in its "purest" formulation (without a limit on search depth) would be rational for games like chess, since it would play optimally. However, that is not feasible in practice; it would take far too long to run. In practice, we run such algorithms with a limit on search depth, to make sure they stop thinking and pick a move in a reasonable amount of time. Those will not necessarily be rational. This gets back to bounded rationality as described by DukeZhou in his answer.
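To make the distinction concrete, here is a minimal sketch of depth-limited minimax on a toy game (a tiny Nim variant invented for illustration; the game and all names are assumptions, not from the original post). With a sufficient depth it returns the true game value (rational play); with the cutoff it falls back on a heuristic guess, which is exactly the bounded-rationality trade-off described above.

```python
class Nim:
    """Toy game: take 1 or 2 stones; whoever takes the last stone wins."""

    def __init__(self, stones, to_move_max=True):
        self.stones = stones
        self.to_move_max = to_move_max

    def is_terminal(self):
        return self.stones == 0

    def evaluate(self):
        if self.stones == 0:
            # The player who just moved took the last stone and wins.
            return -1 if self.to_move_max else 1
        return 0  # heuristic value at a cutoff node: "unknown"

    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]

    def apply(self, move):
        return Nim(self.stones - move, not self.to_move_max)


def minimax(state, depth, maximizing):
    """Plain minimax; the `depth` cutoff is what makes it only boundedly rational."""
    if state.is_terminal() or depth == 0:
        return state.evaluate()
    values = [minimax(state.apply(m), depth - 1, not maximizing)
              for m in state.legal_moves()]
    return max(values) if maximizing else min(values)
```

With enough depth, `minimax(Nim(3), 10, True)` correctly reports a loss for the player to move, while a depth-1 search on the same position can only return the uninformative heuristic value.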
The story is not really clear if we try to talk about this in terms of "picking the best move given the time it's allowed to use", though, because what is or isn't possible in a given amount of time depends very much on factors such as the hardware available and how the algorithm is implemented.
For example, I could hypothetically implement an algorithm that requires a database of pre-computed optimal solutions: it simply looks up the solution in the database and instantly plays the optimal move. Such an algorithm would truly be rational, even given a highly limited amount of time. It would be difficult to implement in practice, because we'd have trouble constructing such a database in the first place, but the algorithm itself is well-defined. So you can't really include something like "given the time it's allowed to use" in your definition of rationality.
I try to reinforce the point that we can only know that AlphaGo played more optimally than Lee Sedol in 4 of 5 games, but that objectively optimal play in Go may ultimately be indeterminable.
– DukeZhou♦
Aug 31 at 19:56
Can I ask, is the idea here re: reflex agents, that the agent has more information than it is utilizing to make the decision?
– DukeZhou♦
Aug 31 at 19:57
@DukeZhou Well, I suppose we don't really have a solid definition of what a "reflex agent" is. My personal definition would pretty much be: an agent that doesn't involve planning / lookahead search. That could range from very simple heuristic / scripted / rule-based agents all the way to complex agents that compute their policy through a fixed-architecture trained Neural Network (the exception being Recurrent Neural Networks, since those might in theory be able to "learn" to execute a planning algorithm inside their black box).
– Dennis Soemers
Aug 31 at 20:00
So, for me a reflex agent is something that directly maps from state to action/policy, without extensively "thinking"/reasoning/planning. I want to say "using a fixed amount of computation", but then a search algorithm with a fixed search depth could also be viewed as a "reflex agent"... so I guess it's difficult to actually get a solid, formal definition there. I guess roughly the "idea" of what I mean is clear though, even if it's difficult to formalize.
– Dennis Soemers
Aug 31 at 20:06
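The "direct mapping from state to action" versus "lookahead" distinction in the comments above can be sketched in a few lines. Everything here is illustrative: the rule table, the toy state, and the function names are assumptions, not anything from the thread.

```python
def reflex_agent(percept):
    """Reflex agent: fixed condition-action rules; no memory, no lookahead."""
    rules = {"light_red": "brake", "light_green": "go"}
    return rules.get(percept, "wait")


def lookahead_agent(state, actions, step, value, depth=2):
    """Planning agent: evaluates where each action leads before committing.

    `step(state, action)` simulates one transition; `value(state)` scores
    a state. The recursive search is exactly the "thinking" a reflex agent
    by definition never does.
    """
    def best_value(s, d):
        if d == 0:
            return value(s)
        return max(best_value(step(s, a), d - 1) for a in actions)

    return max(actions, key=lambda a: best_value(step(state, a), depth - 1))
```

For instance, on a toy line-world where the goal is to reach position 5, `lookahead_agent(3, [1, -1], lambda s, a: s + a, lambda s: -abs(s - 5))` picks the move toward the goal, while a reflex agent could only do so if that rule had been hard-coded.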
That conforms to my assumption (re: lookahead/planning) and I like your definition. I was recently thinking about asking a question on a formal terminological distinction between agents that only evaluate the present state and agents that look ahead. (Just for fun, I've considered Promethean functions, which look ahead, vs. Epimethean functions, which can only utilize history in the form of the present state. "Forethought" vs. "hindsight";)
– DukeZhou♦
Aug 31 at 20:10
When we use the term rationality in AI, it tends to conform to the game theory/decision theory definition of rational agent.
In a solved or tractable game, an agent can have perfect rationality. If the game is intractable, rationality is necessarily bounded. (Here, "game" can be taken to mean any problem.)
There is also the issue of imperfect information and incomplete information.
Rationality isn't restricted to objectively optimal decisions but includes subjectively optimal decisions, where the optimality can only be presumed. (That's why defection is the optimal strategy in the one-shot Prisoner's Dilemma, where the agents don't know the decision-making process of the competitor.)
What may be rational in one environment may not be rational in a different environment. Additionally, what may be locally rational for a simple reflex agent will not appear rational from the perspective of an agent with more knowledge, or a learning agent.
Iterated Dilemmas, where there is communication in the form of prior choices, may provide an analogy. An agent that always defects, even where the competitor has shown willingness to cooperate, may not be regarded as rational, because defecting against a cooperative agent does not maximize utility. A simple reflex agent wouldn't have the capacity to alter its strategy.
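The point about always-defect failing to maximize utility in the iterated game can be checked with a short simulation, using the standard Prisoner's Dilemma payoffs (temptation 5, reward 3, punishment 1, sucker 0); the strategy names and tournament setup are my own illustration.

```python
# Payoffs: (my points, opponent's points) for (my move, opponent's move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}


def always_defect(opponent_history):
    # Reflex-like: same move regardless of what the opponent has done.
    return "D"


def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"


def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees the opponent's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Over 10 rounds, always-defect scores 14 against tit-for-tat, while two tit-for-tat agents score 30 each: the inflexible defector leaves utility on the table exactly as the answer describes.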
However, rationality used in the most general sense might allow that, to the agent making the decision, if the decision is based on achieving an objective and is reached utilizing the information available to that agent, the decision may be regarded as rational, regardless of actual optimality.
The television show Trailer Park Boys has been proposed as an example--the characters make terrible decisions, but the decisions are rational based on the information they have, which is generally highly flawed. This generally leads to sub-optimal outcomes.
– DukeZhou♦
Aug 31 at 19:23
Slight update to my answer. The definition of rationality here is consistent with Russell & Norvig's, where rationality is gauged by performance in an environment and may vary for different types of agents.
– DukeZhou♦
Sep 11 at 17:04