Assessment list
Feedback from the piloting process will allow for a better understanding of how the assessment list, which aims to offer guidance for all AI applications, can be implemented within an organisation. It will also indicate where the assessment list needs specific tailoring, given the context-specific nature of AI.
All interested stakeholders can register for the piloting process and start testing the assessment list. Feedback will be gathered through an online survey, which will be launched in June 2019. Based on this feedback, the High-Level Expert Group on AI (AI HLEG) will propose a revised version of the assessment list to the Commission in early 2020.
If you support the seven key requirements for Trustworthy AI, you can register your interest to participate in the piloting process.
In parallel, the AI HLEG will set up an in-depth review process with a representative set of stakeholders from the private and public sectors to gather more detailed feedback on how the assessment list can be implemented. This qualitative analysis, complementing the quantitative one, will be kicked off in June.
All feedback received will be evaluated by the end of 2019 and serve as input for a revised assessment list to be finalised in 2020.
It should be noted that the Guidelines do not give any guidance on the first component of Trustworthy AI (lawful AI). They explicitly state that nothing in the document should be interpreted as providing legal advice or guidance on how compliance with any applicable existing legal norms and requirements can be achieved. Moreover, nothing in the Guidelines or the assessment list shall create legal rights or impose legal obligations towards third parties. The AI HLEG nevertheless recalls that it is the duty of any natural or legal person to comply with laws, whether applicable today or adopted in the future. The Guidelines proceed on the assumption that all legal rights and obligations that apply to the processes and activities involved in developing, deploying and using AI systems remain mandatory and must be duly observed.