Challenges for Artificial Intelligence


Disaster Relief Problem

The target of the agent competition is an inland earthquake in an urban area. In the agent competition, we have to consider the following conditions.


The simulation starts when an inland earthquake occurs in an urban area.
  • Fires start from ignition points. Buildings located at the ignition points catch fire, and the fires spread. Burning buildings burn more strongly with the passage of time; eventually, the buildings burn out.
  • Road blockages occur as buildings collapse.
  • Civilians are initially placed in various locations. After the simulation starts, each of them heads to the nearest refuge along the shortest path.
  • Many civilians are buried under collapsed buildings. Buried civilians take damage with the passage of time; eventually, they die.


Three types of (platoon) agents are located on the map: fire brigades, police forces, and ambulance teams. The number of agents differs for each map.
  • The fire brigades can extinguish fires.
  • The police forces can clear blockages.
  • The ambulance teams can rescue buried civilians. They can also transport injured civilians.
  • All agents can share useful information using radio communications, which are global. However, the message length and the number of messages are restricted, and the radio communication may be unreliable.
These detailed conditions are defined as a scenario. The problems provided as scenarios are called disaster relief problems.
Participants prepare an effective strategy for various scenarios. In other words, participants solve these complex problems when designing their agents' strategies.
The scoring is based on the number of victims rescued in time and the number of remaining buildings at various levels of fire damage.
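As an illustration, one commonly cited form of the rescue score combines the number of surviving civilians, their remaining health, and the unburned building area. The exact terms and weights have varied between competition years, so the formula below should be treated as an assumption for illustration only:

```python
import math

# Illustrative (assumed) form of the rescue score:
#   score = (P + H / H_init) * sqrt(B / B_init)
# P: number of surviving civilians, H: their remaining health points,
# B: remaining (unburned) building area. The real score definition has
# varied across competition years.
def rescue_score(alive, hp_left, hp_init, area_left, area_init):
    return (alive + hp_left / hp_init) * math.sqrt(area_left / area_init)

# 50 civilians alive at 80% of total initial health, 70% of building
# area intact:
print(round(rescue_score(50, 40000, 50000, 7000, 10000), 2))  # -> 42.5
```

Note how the square root softens the building term: saving civilians dominates the score, while fire damage acts as a multiplicative penalty.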

Previous works

There are many multi-agent research problems that can be investigated using the RoboCupRescue simulation package [1]. Researchers can choose which aspects of the system they want to investigate; previous works have addressed the following aspects.

Task allocation with uncertainty

A core part of the standard scenario is allocating tasks to multiple agents. At any point in time there will be a number of fires, injured or buried civilians, and blocked roads. The agents will know about some subset of these tasks but probably not all. Decisions must therefore be made about whether to search for new tasks, and how to allocate tasks given that new tasks may appear at any time.
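A minimal sketch of this trade-off: greedily assign each agent to its nearest known task, and send any agent left without a task to search for new ones. The data shapes and function names here are hypothetical, not part of the simulator API:

```python
import math

def greedy_allocate(agents, known_tasks):
    """Assign each agent to its nearest known task; leftover agents search.

    A deliberately simple baseline: because new tasks may appear at any
    time, agents with no assignment are sent to explore instead of idling.
    `agents` maps agent id -> (x, y) position; `known_tasks` is a list of
    (x, y) task locations.
    """
    assignments = {}
    remaining = list(known_tasks)
    for agent_id, pos in sorted(agents.items()):
        if not remaining:
            assignments[agent_id] = "search"  # no known task: look for new ones
            continue
        # Pick the closest remaining task (Euclidean distance as a proxy cost).
        best = min(remaining, key=lambda t: math.dist(pos, t))
        assignments[agent_id] = best
        remaining.remove(best)
    return assignments

agents = {"fb1": (0, 0), "fb2": (10, 0), "fb3": (20, 20)}
tasks = [(1, 1), (9, 1)]
print(greedy_allocate(agents, tasks))
```

A real entry would replace the distance proxy with estimated task utility and re-run the allocation whenever newly discovered tasks arrive.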

Coalition formation

Coalitions of agents are generally required for efficient allocation in the earthquake domain. Civilians trapped in building rubble will generally require the combined efforts of several ambulances to be rescued before they die of their injuries.


The earthquake scenario generally requires different types of agents to cooperate. For example, the roads leading to an injured civilian or the hospital may be blocked and must be cleared by police before an ambulance can get to the target. Similarly, decisions about which fires to extinguish first may depend on the presence of nearby injured civilians and ambulances.
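The need for coalitions can be made concrete with a toy model (the dynamics here are assumed for illustration, not the simulator's actual rules): a buried civilian dies after a deadline, and several ambulances digging together free the civilian proportionally faster than one:

```python
import math

# Illustrative model: a civilian under `buriedness` units of rubble dies
# after `ttl` timesteps; one ambulance removes `dig_rate` units per step,
# and k ambulances dig k times as fast.
def coalition_size(buriedness, ttl, dig_rate=1.0):
    """Smallest number of ambulances that can free the civilian in time,
    or None if no coalition can (the civilian dies first)."""
    if ttl <= 0:
        return None
    k = math.ceil(buriedness / (dig_rate * ttl))
    return max(k, 1)

# A civilian under 30 units of rubble with 10 steps to live needs a
# coalition of 3 ambulances:
print(coalition_size(30, 10))  # -> 3
```

Even in this toy form, the coupling between coalition size and deadlines is what makes the allocation problem combinatorial rather than a set of independent single-agent choices.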

Distributed vs centralized control

Because communication is limited there will be a trade-off between centralized and distributed control. A centralized controller may have a more complete picture of the whole situation, but with unreliable communication it may not be able to send commands to remote agents.


With the radio channel model of communication it is possible for agents to choose their own communication structure, possibly even changing it on the fly. Researchers can also implement their own communication models if desired.
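A minimal sketch of operating under such a restricted channel, with assumed per-step limits on message count and length (the limits and message format here are illustrative, not the simulator's):

```python
# Sketch of a sender working within a restricted radio budget: at most
# `max_msgs` messages per step, each at most `max_bytes` long.
def send_within_budget(messages, max_msgs=4, max_bytes=256):
    """Keep the highest-priority messages that fit the per-step budget.

    `messages` is a list of (priority, payload_bytes) pairs; higher
    priority wins, and over-long payloads are truncated to the limit.
    """
    chosen = sorted(messages, key=lambda m: -m[0])[:max_msgs]
    return [payload[:max_bytes] for _, payload in chosen]

msgs = [(2, b"fire at building 17"),
        (9, b"civilian buried at 42"),
        (5, b"road 3 blocked")]
print(send_within_budget(msgs, max_msgs=2))
```

The interesting design question is what the priorities should be: a centralized controller can rank messages globally, while distributed agents must rank them from local, possibly stale, information.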

With respect to previous works on these aspects, Prof. Akin described the situation as follows in AI Magazine [2].
The overall goal of developing robust software systems that are capable of efficiently coordinating large agent teams for USAR (Urban Search and Rescue) raises several research challenges such as the exploration of large scale environments in order to search for survivors, the scheduling and planning of time-critical rescue missions, coalition formation among agents, and the assignment of agents and coalitions to dynamically changing tasks. In the target domain, these issues are even more challenging due to the restricted communication bandwidth. Moreover, the simulated environment is highly dynamic and only partially observable by the agents. Under these circumstances, the agents have to plan and decide their actions asynchronously in real-time while taking into account the long-term effects of their actions.
Over the years the winning entries in the competition showed a strong focus on highly optimized computations for multi-agent planning and model-based prediction of the outcome of the ongoing incidents. Several techniques for multi-agent strategy planning and team coordination in dynamic domains have also been developed based on the rescue simulator. Task allocation problems inherently found in this domain were first described in [3]. The solutions developed for the distributed constraint optimization problems (DCOPs) encountered during the simulations were evaluated in [4]. The coalition formation with spatial and temporal constraints (CFST) model and the state-of-the-art DCOP algorithms used for solving CFSTs were given in [5]. The problem of scheduling rescue missions was identified and described together with a real-time executable solution based on genetic algorithms in [6].
Furthermore, there has been substantial work on building information infrastructure and decision support systems for enabling incident commanders to efficiently coordinate rescue teams in the field. For example, Schurr et al. introduced a system based on software developed in the rescue competitions for the training and support of incident commanders in Los Angeles [7].
The Rescue Simulation League aims to ease entry for newcomer teams. However, the rich set of problems that need to be addressed by the agent teams competing in the competition and the degree of sophistication of current successful solutions make this a non-trivial task. RMasBench [8] is a challenge recently introduced for this purpose. Certain aspects such as task allocation, team formation, and route planning are extracted from the entire problem addressed by the agents and are presented as sub-problems in stand-alone scenarios with an abstract interface. Consequently, the participating teams are able to focus more on the topics relevant to their own research rather than dealing with all of the low-level issues. Currently, RMasBench has a generic API for DCOP algorithms together with reference implementations of state-of-the-art solvers, such as DSA [9] and MaxSum [10].
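DSA itself is simple to sketch. In the toy DCOP below (the graph, conflict cost, and names are illustrative, not RMasBench's API), each agent repeatedly moves, with some probability, to the value that minimizes conflicts with its neighbours' current values:

```python
import random

def dsa(neighbors, domain, rounds=100, p=0.7, seed=0):
    """Minimal DSA sketch: agents minimise conflicts (equal values)
    with their neighbours on an undirected constraint graph.

    `neighbors` maps each agent to its neighbour list; all agents act
    on the previous round's values, as in the parallel DSA model.
    """
    rng = random.Random(seed)
    value = {a: rng.choice(domain) for a in neighbors}
    for _ in range(rounds):
        snapshot = dict(value)  # previous round's assignment
        for agent, nbrs in neighbors.items():
            conflicts = lambda v: sum(snapshot[n] == v for n in nbrs)
            best = min(domain, key=conflicts)
            # Switch only if it strictly improves, and then only with
            # probability p (the stochastic step that breaks symmetry).
            if conflicts(best) < conflicts(snapshot[agent]) and rng.random() < p:
                value[agent] = best
    return value

# Triangle graph with two colours: one conflict is unavoidable, but DSA
# settles into a low-conflict assignment.
tri = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(dsa(tri, [0, 1]))
```

The activation probability `p` is the key knob: with `p = 1` neighbouring agents can oscillate in lockstep, while `p < 1` lets the system settle.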
In order to facilitate code sharing among the participating teams, a novel protocol for inter-agent communication is currently being developed in the Rescue Simulation League to be integrated with the official release next year. This new addition will increase the modularity of agent solutions and will also allow combination of different agent solutions within a single team.

Current Challenges of Agent Competition

The following challenges of the agent competition are currently presented by its technical committee.
  •  Multi-task allocation
  •  Group formation
  •  Information search and sharing
  •  Path planning
These problems have been studied so far, and the results have been published as source code on the web [11]. However, it has been difficult to re-use these results, because the code was too complicated to re-use.
To solve this problem, we present a corresponding algorithm module for each problem using the ADF (Agent Development Framework) for the rescue simulation package.
You can find how to use the ADF on this website. Please check it.


[1] Skinner, C.; Ramchurn, S. 2010. The RoboCup Rescue Simulation Platform. In 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), Toronto, Canada, May 10-14, 2010, Volume 1-3, pp. 1647-1648.
[2] Akin, H. L.; Nobuhiro, I.; Jacoff, A.; Kleiner, A.; Pellenz, J.; Visser, A. 2013. RoboCup Rescue Robot and Simulation Leagues. AI Magazine, Vol. 34, No. 1, pp. 78-86.
[3] Nair, R.; Ito, T.; Tambe, M.; and Marsella, S. 2002. Task Allocation in the RoboCup Rescue Simulation Domain: A Short Note. In RoboCup 2001: Robot Soccer World Cup V, pp. 751-754.
[4] Ferreira, P.; Dos Santos, F.; Bazzan, A.; Epstein, D.; and Waskow, S. 2010. RoboCup Rescue as Multiagent Task Allocation among Teams: Experiments with Task Interdependencies. Autonomous Agents and Multi-Agent Systems 20(3):421-443.
[5] Ramchurn, S.; Farinelli, A.; Macarthur, K.; and Jennings, N. 2010. Decentralized Coordination in RoboCup Rescue. The Computer Journal 53(9):1447-1461.
[6] Kleiner, A.; Brenner, M.; Bräuer, T.; Dornhege, C.; Göbelbecker, M.; Luber, M.; Prediger, J.; Stückler, J.; and Nebel, B. 2005. Successful Search and Rescue in Simulated Disaster Areas. In Bredenfeld, A.; Jacoff, A.; Noda, I.; and Takahashi, Y., eds., RoboCup 2005: Robot Soccer World Cup IX, volume 4020 of Lecture Notes in Computer Science, pp. 323-334. Springer.
[7] Schurr, N., and Tambe, M. 2008. Using Multi-Agent Teams to Improve the Training of Incident Commanders. Defence Industry Applications of Autonomous Agents and Multi-Agent Systems, pp. 151-166.
[8] Kleiner, A.; Dornhege, C.; and Hertle, A. 2011. RMasBench - Rescue Multi-Agent Benchmarking. Website. Available online at http://kaspar.; visited on 10/2012.
[9] Fitzpatrick, S., and Meertens, L. 2003. Distributed Coordination through Anarchic Optimization. In Distributed Sensor Networks: A Multiagent Perspective, pp. 257-293. Kluwer Academic.
[10] Farinelli, A.; Rogers, A.; Petcu, A.; and Jennings, N. R. 2008. Decentralised Coordination of Low-Power Embedded Devices Using the Max-Sum Algorithm. In Proceedings of the Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), pp. 639-646.
[11] RoboCup Rescue Simulation