With AI being incorporated into ever more complex and critical systems, ranging from managing our power infrastructure to diagnosing life-threatening illnesses, greater regulatory oversight and good governance of the technology are more important than ever. Going one step beyond mandated laws and regulations, there are growing calls for the AI sector to uphold sound ethics in the responsible development of innovations that can have major impacts (both positive and negative) on our lives, society and the environment. AI professionals will likely agree that it is in all of our best interests to uphold virtuous practices in order to earn a positive reputation for our industry as a whole.
To add to this important discussion, we summarize here a handful of examples of how we at Zetane uphold sound principles of ethics and responsibility when developing AI technology. Our examples will not address macro-level issues such as the ethical implications of automation for the global labour force. Instead, we target issues internal to our operations and team, with a primary focus on features of our AI software products that support conducting ethics assessments in machine learning.
You’ll be hard-pressed to count all the proposed guiding principles for sound ethics in AI, and we plan to address various ethical issues in upcoming articles on our blog. For this article, however, we limit our discussion to the basic principles outlined in the United States Department of Defense AI Ethical Principles for the responsible adoption and use of new technologies in combat and non-combat contexts: be responsible, equitable, traceable, reliable and governable. We reframe these principles from the perspective of users and members of Zetane as developers of AI solutions. They are especially pertinent to our company because our work often entails industrial AI solutions deployed in operations-critical and safety-critical contexts, which raise ethical issues similar to those the Department of Defense faces when using AI in mission-critical and safety-critical settings.
Responsible
[AI professionals] will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.
Our in-house efforts to ensure the responsible development of AI begin with our commitment to being accountable for the innovations we commercialize. For instance, we refuse outright any project proposal for uses of AI known to pose significant threats to human wellbeing or the environment. Before committing to any project, we conduct safety and ethics assessments in order to inform our clients of potential shortcomings. While developing a project, we often mandate simulations and prototypes for safety and performance testing prior to deployment, regardless of whether these tests increase the cost and timeline of the project.
Equitable
[Developers and implementors of AI] will take deliberate steps to minimize unintended bias in AI capabilities.
Identifying unintended biases within reams of abstract computer code is difficult. The Zetane Engine’s visual displays of the internal components of machine learning algorithms, and of how they process individual data entries, enable AI developers and domain experts to identify biases with the human eye. The following case studies from past projects at Zetane demonstrate how AI professionals can use these intuitive visuals of AI to audit datasets and machine learning models for biases and potentially harmful outputs. To complement these visual and potentially subjective assessments, the Zetane Engine includes over a dozen popular explainable AI (XAI) tools for thorough quantitative assessments of bias in datasets and AI models. Complementing these XAI assessments with other open-source tools for fairness (equity) assessments, such as Google’s What-If Tool, is straightforward.
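To make the idea of a quantitative bias assessment concrete, here is a minimal sketch of one common check, the demographic-parity gap, which compares favourable-outcome rates across groups defined by a protected attribute. The loan-approval data, group labels and flagging threshold below are hypothetical illustrations, not outputs of our products:

```python
# Hypothetical model predictions on loan applications, tagged with a
# protected-attribute group. A demographic-parity check compares the
# approval rate each group receives.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

def approval_rate(records, group):
    outcomes = [r["approved"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(records, "A")
rate_b = approval_rate(records, "B")
disparity = abs(rate_a - rate_b)  # flag if above a chosen threshold
print(f"demographic parity gap: {disparity:.2f}")
```

Dedicated fairness toolkits compute many such metrics at once; the point of the sketch is only that "bias" can be turned into a number that is tracked and audited like any other test.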
Current best practices in machine learning note the need to promote equity and to assess for problems related to bias in data and models at every stage of the AI development pipeline. In addition to the aforementioned visual features of our development software, we reduce risks of bias (and thus of resultant problems like discrimination) by promoting diversity and inclusion in our AI development team. Many issues of bias and discrimination do not affect the majority and can thus be overlooked. This is why our team and in-house practices ensure that women, visible minorities and other equity-seeking groups are involved in every development initiative. Such diversity of perspective and experience helps identify “blind spots” in equity and bias assessments, as each member of Zetane can assess AI technology for the unique, sometimes obscure biases relevant to their respective communities and identities.
Traceable and Reliable
Traceable: [An organization’s] AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources and design procedures and documentation.
Reliable: [An organization’s] AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
Central to our service offerings is the ability for our customers to gain a profound understanding of the inner workings of AI models. The visual, human-understandable representations of AI made possible by the Zetane Engine and Viewer embody transparency and open unprecedented opportunities to audit AI technologies.
Leveraging rich visuals of machine learning models and datasets to produce more trustworthy AI for industry is a booming area of recent research. We liken this domain of inquiry to the advent of the microscope, a tool that enabled researchers to explore an uncharted microscopic world. To this end, we made it a priority to offer fellow explorers our forever-free software platform, the Zetane Viewer. We built this tool on the leading open-source industrial standard for representing machine learning algorithms, the Open Neural Network Exchange (ONNX) format. We encourage AI professionals to use the Viewer to inspect, in unprecedented detail, the internal workings of pre-trained models used in diverse industrial applications. Our aim is a crowd-sourced effort to find new patterns and strategies within the largely uncharted world of visualized neural networks and datasets, so that together we can produce ever more transparent and understandable (and thus trustworthy) industrial applications of AI.
Governable
[An organization] will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour.
Designing 100% automated systems comes with risks of catastrophic failure. Thus, standard features of our AI products for operations-critical applications include “kill switches” and control centres with “human-in-the-loop” capacities. These interfaces for controlling AI technologies help ensure that anyone can act to quell a malfunctioning system, and preserve the ability to detect unintended consequences originating from unfamiliar AI systems.
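The core of such a kill switch is simple: the automated loop must check a signal an operator controls on every cycle, so deactivation never depends on the system behaving well. A minimal, generic sketch using a shared event flag (illustrative only, not the control interface of any deployed product):

```python
import threading
import time

# Operator-controlled flag: tripping it halts the automated loop.
kill_switch = threading.Event()
cycles_run = 0

def control_loop():
    """Automated system: does one unit of work per cycle, but only
    while the human-controlled kill switch has not been tripped."""
    global cycles_run
    while not kill_switch.is_set():
        cycles_run += 1      # stand-in for one step of the deployed system
        time.sleep(0.01)

worker = threading.Thread(target=control_loop)
worker.start()
time.sleep(0.05)             # system runs under human supervision...
kill_switch.set()            # ...until the operator disengages it
worker.join(timeout=1)
print("stopped after", cycles_run, "cycles")
```

The design choice that matters is that the check sits inside the loop itself: the system polls for permission to continue, rather than relying on an external process to forcibly terminate it.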