Understanding risks associated with AI in travel sector
Global efforts to govern artificial intelligence (AI) technologies have gained momentum, with organisations and governments worldwide striving to create unified standards to navigate the risks and complexities AI poses. AI risk management in the travel sector demands proactive governance to mitigate threats while ensuring the ethical, responsible, and safe use of this transformative technology.
The global approach to AI governance encompasses international organisations such as the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD), and initiatives like the Global Partnership on Artificial Intelligence (GPAI). These entities emphasise the need for inclusive and transparent AI governance frameworks, promoting international collaboration to establish comprehensive AI governance standards.
The UN, for example, is taking concrete steps towards global AI governance through its High-level Advisory Body on AI. In 2023, the UN Security Council hosted its first meeting on AI, stressing the need for global unity on essential AI principles to minimise risks. Similarly, the OECD has pioneered efforts to define AI governance standards, outlining five key principles: inclusive growth, transparency, accountability, robustness, and human-centred values. The G20 countries welcomed these principles, recognising AI’s potential to advance the UN Sustainable Development Goals (SDGs).
Moreover, GPAI brings together experts from governments, academia, industry, and civil society to advance research and practical measures for responsible AI deployment. As AI becomes more integrated into industries like travel, where it optimises services and customer experiences, the need for a universal governance framework becomes evident. AI risk management is critical, especially in travel, where AI-driven systems help manage vast volumes of data, optimise operations, and enhance safety. In September 2024, the European Union, the UK, the US, and Israel signed the world’s first ‘legally binding’ treaty on AI and human rights, which commits signatories to implementing safeguards against the threats the technology poses to human rights, democracy, and the rule of law.
The travel sector exemplifies the dual role of AI: it can revolutionise operations, but it also poses risks such as data privacy breaches and algorithmic bias. AI governance standards must prioritise ethical considerations, ensuring that AI technologies align with human values and regulatory frameworks. This includes addressing AI's impact on job displacement, security vulnerabilities, and the spread of misinformation, making the travel sector's reliance on AI both an opportunity and a responsibility.
A universal set of AI governance principles is crucial to navigating these challenges, ensuring that AI remains a force for good while mitigating its inherent risks across industries.
This article is based on the WTTC report “Responsible Artificial Intelligence (AI): Overview of AI Risks, Safety & Governance”, published in April 2024.