On Artificial Intelligence, trust is a must, not a nice to have.
Artificial intelligence (AI) has rapidly become a transformative technology that impacts various aspects of our daily lives. From virtual assistants to self-driving cars, AI has the potential to revolutionize the way we live and work. However, as AI continues to evolve, it raises concerns about ethical, legal, and social implications that need to be addressed.
“On Artificial Intelligence, trust is a must, not a nice to have.” Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age.
Elon Musk, the billionaire entrepreneur and CEO of Tesla and SpaceX, has warned of the potential for “civilization destruction” if artificial intelligence (AI) is not regulated properly. Despite investing in AI, including as a founding member of OpenAI, Musk has been vocal about the dangers of AI and called for a pause in the “out of control” race for AI development. In an interview with Tucker Carlson (CNN), Musk said that government regulation is necessary, and proposed that a regulatory agency should seek insight from industry before proposing rules. Musk is reportedly working on a new venture, X.AI, to rival AI offerings from tech giants Microsoft and Google, and is also said to be building a team of AI researchers and engineers.
In this blog article, we will explore the need for a responsible approach toward AI and its impact. We will delve into the latest AI regulations, such as the EU's AI Act, the world's first comprehensive legislation on AI, and the concerns raised by BEUC Deputy Director General Ursula Pachl. Ultimately, we will look into Metaroom's approach to responsible AI.
Understanding AI and its impact
Data-driven technologies have a significant impact on data collection and use, as they are designed to learn and improve by analyzing vast amounts of data. This raises privacy and data protection concerns, as the volume of personal data collected by AI systems continues to grow. A lack of transparency in how that data is used can lead to distrust of AI systems and a feeling of unease. To address these concerns, organizations and companies must take proactive measures to protect individuals’ privacy by implementing robust data security protocols and designing AI systems that adhere to ethical principles. Transparency in the use of personal data by AI systems is equally crucial: individuals must be able to understand how their data is being used and to control that use, including the ability to opt out of data collection and to request that their data be deleted. By doing so, we can build a future where AI technologies benefit society while protecting individuals’ privacy and data.
EU’s AI Act
The European Union’s proposed legal framework, the Artificial Intelligence Act (AI Act), seeks to promote the responsible development and deployment of AI while respecting human rights and EU values. This comprehensive regulatory approach is designed to address the challenges posed by AI, promote innovation, and ensure consumer protection.
The AI Act takes a comprehensive approach to regulating AI technologies, starting with the classification of AI systems into risk categories ranging from minimal to high risk. The required regulatory compliance depends on the risk level, and high-risk AI systems are subject to stricter rules, including mandatory conformity assessments.
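As a purely illustrative sketch (the Act defines these categories in legal, not programmatic, terms, and the example obligations below are simplified paraphrases, not quotations from the regulation), the risk-tiering logic can be pictured as a simple lookup:

```python
# Illustrative sketch of the AI Act's risk-based approach (simplified).
# Tier names follow the Act; the obligation texts are paraphrases.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring)",
    "high": "mandatory conformity assessment before deployment",
    "limited": "transparency obligations (e.g. disclose that users interact with AI)",
    "minimal": "no additional obligations",
}

def obligations(risk_tier: str) -> str:
    """Return the (simplified) compliance requirement for a given risk tier."""
    return RISK_TIERS[risk_tier]

print(obligations("high"))  # mandatory conformity assessment before deployment
```

The point of the risk-based design is exactly this asymmetry: the regulatory burden scales with the potential for harm rather than applying uniformly to all AI systems.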
To ensure transparency and accountability, the AI Act mandates that developers provide clear information about the system’s capabilities, limitations, and potential risks. Moreover, AI systems must be designed with human oversight and accountability measures in place.
Certain AI practices that pose unacceptable risks to human rights are prohibited by the AI Act, such as social scoring systems or real-time biometric identification in public spaces, with specific exceptions. In line with the AI Act’s emphasis on unbiased and non-discriminatory outcomes, developers are obliged to document and trace the data sources used in AI systems, underscoring the importance of high-quality training data.
The AI Act proposes the establishment of the European Artificial Intelligence Board (EAIB), which is responsible for providing guidance, sharing best practices, and ensuring the consistent application of the regulation across EU member states. The AI Act also provides for substantial fines in cases of non-compliance: violations related to high-risk AI systems can draw penalties of up to 6% of a company’s annual global turnover.
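To make the scale of that penalty concrete, here is a purely hypothetical calculation (the turnover figure is invented for illustration and does not refer to any real company or case):

```python
# Hypothetical example: the 6% cap on annual global turnover under the AI Act.
annual_global_turnover_eur = 500_000_000  # assumed turnover, for illustration only
max_fine_eur = 0.06 * annual_global_turnover_eur

print(f"Maximum fine: EUR {max_fine_eur:,.0f}")  # Maximum fine: EUR 30,000,000
```

Because the cap is tied to turnover rather than a fixed sum, the deterrent scales with the size of the company.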
The legal framework is a significant step forward in regulating AI within the European Union. By adopting a risk-based approach, enforcing transparency and accountability, and establishing a governance structure, the EU aims to ensure that AI technologies are developed and deployed responsibly, protecting consumers and upholding EU values.
Call for investigations into ChatGPT and similar chatbots
As AI continues to advance and impact our daily lives, concerns about its potential harm to consumers are becoming increasingly urgent.
“For all the benefits AI can bring to our society, we are currently not protected enough from the harm it can cause people.” Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC)
The European Consumer Organisation (BEUC) has recently called for an investigation into ChatGPT and similar chatbots due to the risks they pose. BEUC is concerned that the EU’s AI Act will take too long to take effect, leaving consumers vulnerable to unregulated AI technology. Ursula Pachl, Deputy Director General of BEUC, has called for EU and national authorities to reassert control over these AI systems and subject them to greater public scrutiny. Furthermore, a complaint concerning GPT-4 was filed with the US Federal Trade Commission by the US-based civil society group CAIDP.
Metaroom’s Approach to Trustworthy AI
Trustworthy AI has been a major concern since the beginning of Metaroom’s development. To ensure that our AI technologies are developed responsibly, we strongly adhere to the seven key requirements for trustworthy AI proposed by the European Commission in their Ethics Guidelines for Trustworthy AI:
(1) Human agency and oversight
(2) Technical robustness and safety
(3) Privacy and data governance
(4) Transparency
(5) Diversity, non-discrimination, and fairness
(6) Societal and environmental well-being
(7) Accountability
Each of these requirements plays an important role in ensuring that AI is developed and used in a way that benefits society while upholding ethical and legal standards. Now let’s take a closer look at how Metaroom has applied the requirements.
From the very beginning, we implemented fully automated, streamlined testing pipelines to verify the robustness and accuracy of the developed methods (2). This allows us to continuously improve the individual parts of the whole pipeline while keeping the risk of negative side effects minimal. For example, we immediately notice if improvements for certain types of rooms affect other room types (5).
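A minimal sketch of what such a regression check could look like — the room types, error figures, and thresholds here are hypothetical and do not describe Metaroom’s actual pipeline:

```python
# Hypothetical regression check: verify that a change improving one room
# type does not silently degrade reconstruction accuracy for the others.
BASELINE_ERROR_CM = {"rectangular": 1.2, "l_shaped": 1.8, "sloped_ceiling": 2.5}

def check_no_regression(new_error_cm: dict, tolerance_cm: float = 0.1) -> list:
    """Return the room types whose error grew beyond the allowed tolerance."""
    return [room for room, err in new_error_cm.items()
            if err > BASELINE_ERROR_CM[room] + tolerance_cm]

# An improvement to rectangular rooms that accidentally hurts sloped ceilings:
regressions = check_no_regression(
    {"rectangular": 0.9, "l_shaped": 1.8, "sloped_ceiling": 3.1})
print(regressions)  # ['sloped_ceiling']
```

Running a check of this kind on every change is what makes it possible to spot the cross-room-type side effects mentioned above before they reach users.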
In designing our application, the user plays a vital role (5,6). Efficient human-computer interaction with our app not only facilitates the interaction from the user’s perspective but simultaneously helps to increase the quality of the acquired data, in turn leading to superior final reconstruction outcomes (2).
To achieve optimum performance and robustness, we combine cutting-edge data-driven methods with conventional computer vision technology. Because the behavior of the handcrafted computer vision methods is well understood, we can increase the degree of explainability and transparency compared to purely data-driven solutions (4). In addition, room geometry is reconstructed from physical depth data obtained with a dedicated depth sensor, which allows us to omit error-prone depth estimation steps (2). Since even the best sensors show slight deviations, users are informed about the expected accuracy of the reconstructed models (7). To ensure maximum human autonomy, the user can finally view, review, and adjust the reconstructed room model (1).
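As a toy illustration of the kind of accuracy disclosure described above — the deviation figure is an assumption for the example, not a Metaroom sensor specification:

```python
# Hypothetical: turn a depth sensor's stated deviation into a
# user-facing accuracy note for the reconstructed room model.
SENSOR_DEVIATION_M = 0.01  # assumed sensor tolerance of +/- 1 cm

def accuracy_note(deviation_m: float = SENSOR_DEVIATION_M) -> str:
    """Format the sensor deviation as a plain-language accuracy statement."""
    return f"Expected accuracy: +/-{deviation_m * 100:.0f} cm"

print(accuracy_note())  # Expected accuracy: +/-1 cm
```

Surfacing this number to the user, rather than hiding it, is what turns a hardware limitation into a transparency measure.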
To implement the defined requirements for trustworthy AI throughout the entire life cycle of Metaroom’s AI-based solution, we rely on both technical and non-technical processes. Metaroom’s active participation in the Joint Focus Group on Artificial Intelligence of the European DIGITAL SME Alliance and the Joint Research Centre of the European Commission underscores our efforts in this area.
In conclusion, the use of AI has the potential to revolutionize the way we live and work, but it also raises concerns about ethical, legal, and social implications that need to be addressed. The need for responsible AI development is paramount, as illustrated by the EU’s proposed legal framework, the Artificial Intelligence Act (AI Act), which promotes the responsible development and deployment of AI while respecting human rights and EU values.
However, there are still concerns about unregulated AI technology, as highlighted by the European Consumer Organisation (BEUC), which has called for greater regulation to protect consumers from harm caused by these technologies. As AI technology continues to evolve, organizations and companies must take proactive measures to ensure that AI technologies are developed responsibly, protecting consumers and upholding ethical and legal standards. Metaroom’s approach to trustworthy AI, based on the EU Ethics Guidelines and proposed standards, demonstrates the importance of prioritizing ethical principles in AI development and deployment. It is crucial that we continue to work towards building a future where AI technologies benefit society while protecting individuals’ privacy and data protection.
Ready to experience trustworthy AI solutions for your business or project?
Contact Metaroom® today to learn more about our cutting-edge technology and how we can help you achieve your goals with responsible and reliable AI.
Let’s work together to build a better future powered by AI.