Given AI’s potential for misuse, how do we develop and deploy algorithmic systems responsibly?
Increasingly, AI systems are being deployed in contexts where safety risks can have widespread consequences, including medicine, finance, transportation, and social media. This makes anticipating and mitigating such risks — in both the near and long term — an urgent societal need.
At FuturismAI, we design and develop best practices that can help avert likely accidents, misuses, and unintended consequences of AI technologies. We don't have to wait for such incidents to arise: we believe precautions can be taken as early as the research stage to ensure the development of safe AI systems.
As AI systems are deployed across an ever-growing number of domains, the fairness, transparency, and accountability of these systems have become a critical societal concern. This Program examines the intersections between AI and some of humanity's most fundamental values, addressing urgent questions about algorithmic equity, explainability, responsibility, and inclusion.
Through original research and multistakeholder partnerships, we ask how AI can help build a world that is more (and not less) just than the one that came before it.
Mature industrial sectors (e.g., aviation) collect their real-world failures in incident databases to inform safety improvements. Intelligent systems currently cause real-world harms without a collective memory of their failings; as a result, companies repeatedly make the same mistakes in the design, development, and deployment of intelligent systems. A collection of intelligent-system failures experienced in the real world (i.e., incidents) is needed to ensure intelligent systems benefit people and society.

Our project "TrackAI" is building "The AI Incident Database" to enable AI incident avoidance and mitigation. The database supports a variety of research and development use cases with faceted and full-text search over more than 1,000 incident reports archived to date.
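As a rough illustration of the kind of query workflow such a database is meant to support, the sketch below combines faceted filtering with simple full-text matching over incident records. The record fields, facet names, and sample data are illustrative assumptions for this sketch, not the database's actual schema or API.

```python
# Illustrative sketch only: the record schema, facet names, and sample data
# are assumptions for demonstration, not TrackAI's actual data model or API.
from dataclasses import dataclass


@dataclass
class IncidentReport:
    incident_id: int
    title: str
    description: str
    sector: str      # facet, e.g. "transportation", "medicine", "finance"
    harm_type: str   # facet, e.g. "physical", "financial"


def search(reports, text=None, **facets):
    """Return reports matching every facet filter and, if given, the free-text query."""
    results = []
    for r in reports:
        # Faceted filtering: every keyword argument must match the record exactly.
        if any(getattr(r, key) != value for key, value in facets.items()):
            continue
        # Naive full-text matching over title and description.
        if text and text.lower() not in (r.title + " " + r.description).lower():
            continue
        results.append(r)
    return results


reports = [
    IncidentReport(1, "Autonomous vehicle collision",
                   "Self-driving car failed to detect a pedestrian at night.",
                   sector="transportation", harm_type="physical"),
    IncidentReport(2, "Loan model bias",
                   "Credit-scoring model systematically rejected qualified applicants.",
                   sector="finance", harm_type="financial"),
]

# Restrict to one sector, then narrow the same facet with a text query.
print(search(reports, sector="transportation"))
print(search(reports, text="pedestrian", sector="transportation"))
```

A production system would of course back these queries with a proper search index rather than a linear scan, but the combination shown here (structured facets narrowing a free-text query) is the core interaction the database is designed to support.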