Code of Responsible AI

Purpose of this code

As AI continues to evolve rapidly and shape our world, ethical considerations must be at the forefront of everything we do as responsible actors in Data-Centric AI development.

Responsible AI is at the heart of our mission: to help global enterprises identify the sources of errors, noise, and bias in their data, empowering them to build fair, accurate AI applications in line with AI regulations.

This Code outlines the core principles that ensure that our product is designed to empower users to make responsible use of AI technology that delivers positive societal outcomes:

Ethical AI Principles

  1. Representative and high-quality data: The use of representative and high-quality data is critical to the development of ethical AI applications. AI systems rely on accurate and reliable data inputs to make decisions, which can have a significant impact on the outcomes of the resulting technology. Therefore, we are committed to ensuring that data accurately represents the context of the AI application by empowering users to identify sources of error, noise, and bias in their data.
  2. Fairness: Fairness is a fundamental value that is embedded in all our AI methods and processes. We promote fairness by designing methodologies that enable our customers to build AI applications that are impartial and equitable in their treatment of all data subjects.
  3. Accuracy: Accuracy is essential to the reliability and effectiveness of our product. Therefore, we are committed to designing and developing AI processes and methodologies that prioritize accuracy in all aspects of our work.
  4. Avoidance and mitigation of bias: We recognize that bias can manifest in different ways throughout the AI lifecycle. This includes systemic biases that may exist in datasets as well as computational and statistical biases that may arise from non-representative samples. We are committed to designing and developing AI processes and methodologies that enable users to identify and mitigate bias.
  5. Non-discrimination: We design and develop AI processes and methodologies in a manner that promotes non-discrimination. In the design and development of our product, we are committed to ensuring that AI applications do not discriminate against individuals or groups on any grounds, including but not limited to sex, race, color, ethnic or social origin, language, religion, or any other personal characteristic.

We understand that Responsible AI is a fast-developing field that requires continuous learning, iteration, and adaptation. We are committed to regularly reviewing and improving our approach to ensure that our work aligns with these principles as well as with the latest regulatory developments. We recognize that as AI evolves, so too will the challenges we face, and we are committed to staying engaged with the broader AI community to promote the responsible development and deployment of AI.