The goal of achieving artificial general intelligence (http://www.scholarpedia.org/article/Artificial_General_Intelligence), the capacity of an engineered system to display human-like general intelligence, is still some way off. Humans, by contrast, acquire this kind of common sense early: think of babies who quickly learn the laws of physics by constantly manipulating or dropping objects to see what happens.

"Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning," the researchers write. To better evaluate how machines reason, the team created a benchmark called Action-Goal-Efficiency-coNstraint-uTility, or AGENT (https://arxiv.org/pdf/2102.12321v4.pdf) for short.
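For illustration, here is a minimal sketch of how a model might be scored on a benchmark of this kind. It assumes a violation-of-expectation setup in which each trial pairs an "expected" test video with a "surprising" one, and a model is counted correct when it rates the surprising video as more surprising than its matched expected counterpart; the names TrialPair and model_surprise are hypothetical, not part of the AGENT release.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TrialPair:
    # One matched test pair: the same familiarization context,
    # followed by an "expected" and a "surprising" test video.
    # (Hypothetical structure, for illustration only.)
    expected_video: str    # path or ID of the plausible outcome
    surprising_video: str  # path or ID of the implausible outcome

def evaluate(model_surprise: Callable[[str], float],
             pairs: List[TrialPair]) -> float:
    """Score a model on violation-of-expectation trial pairs.

    A pair counts as correct when the model assigns a higher
    surprise rating to the surprising video than to the matched
    expected one. Returns the fraction of pairs scored correctly.
    """
    correct = sum(
        model_surprise(p.surprising_video) > model_surprise(p.expected_video)
        for p in pairs
    )
    return correct / len(pairs)
```

Scoring each pair relatively, rather than thresholding an absolute surprise rating, sidesteps the need to calibrate every model's rating scale before comparison.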

Though the test is still being improved, the team believes that AGENT could be a helpful diagnostic tool for evaluating and further advancing common sense in AI systems.
