TicTacToe
Description
TicTacToe is an environment for evaluating agents on playing Tic-Tac-Toe against an LLM opponent. This environment wraps the TicTacToe implementation from TextArena, a framework for text-based game environments.
Capabilities
- Strategic decision-making in a simple game
- Win/block/fork recognition
- Minimax-style reasoning
- Testing fundamental game-playing capabilities
Compute Requirements
TicTacToe does not require a sandbox. It has minimal compute requirements.
License
MIT.
Tasks
There are two splits: train (50 tasks) and test (50 tasks). Each split contains 50 tasks from a single variant:
- TicTacToe-v0
Each task is seeded for reproducibility.
Reward Structure
This is a sparse-reward environment. Rewards are mapped from TextArena's native outcomes {-1, 0, 1} (loss, draw, win) to {0.0, 0.5, 1.0} via (raw + 1) / 2.
We do not use LLM graders for this environment; reward is determined programmatically.
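The reward mapping above can be sketched as follows; the function name is illustrative and not part of the environment's actual API:

```python
def map_reward(raw: int) -> float:
    """Map TextArena's native reward {-1, 0, 1} to {0.0, 0.5, 1.0}.

    Illustrative helper: -1 (loss) -> 0.0, 0 (draw) -> 0.5, 1 (win) -> 1.0.
    """
    if raw not in (-1, 0, 1):
        raise ValueError(f"unexpected raw reward: {raw}")
    return (raw + 1) / 2
```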
Data
Game state is generated procedurally by the TextArena engine using seeded randomness. No external data files are required.
Tools
Agents are given a single tool:
place_mark(position): Place your mark on the board at the given position (0-8). 0=top-left, 4=center, 8=bottom-right.
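The 0-8 position indexing maps to a 3x3 grid in row-major order. A minimal sketch of that conversion (the helper name is hypothetical, not part of the tool interface):

```python
def position_to_cell(position: int) -> tuple[int, int]:
    """Convert a 0-8 board position to (row, col) on a 3x3 grid.

    Matches place_mark's indexing: 0=top-left, 4=center, 8=bottom-right.
    """
    if not 0 <= position <= 8:
        raise ValueError("position must be in 0-8")
    return position // 3, position % 3
```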
Time Horizon
TicTacToe is a multi-turn environment.
Environment Difficulty
Easy to Medium. Tic-Tac-Toe is a simple game in which perfect play by both sides leads to a draw, but strong play requires recognizing winning opportunities and blocking opponent threats.
Other Environment Requirements
This environment requires an OpenAI API key (passed via secrets) to power the LLM opponent.
Safety
Agents in TicTacToe interact only with a board game and have no access to external systems, the internet, or sensitive data. The environment does not present safety risks.
Citations
@software{textarena2024,
  author = {Guertler, Leon and Banting, Wilfried and Pignatelli, Eduardo},
  title = {TextArena},
  year = {2024},
  publisher = {GitHub},
  url = {https://github.com/LeonGuertler/TextArena}
}