Understanding the limitations of AI is important both scientifically and ethically. "Bias in an algorithm is defined as a systematic error in its outputs or processes" (Vicente & Matute, 2023). Critically appraising the bias and correctness of AI results is therefore essential: whenever you receive output, consider how accurate it is and what may have skewed it.
The following framework is meant to guide users in critically assessing the use and output of AI tools.
| Criterion | Questions to ask |
| --- | --- |
| Reliability | Does the creator have a stake in the information produced? Do they have the credentials and transparency to reliably provide results? |
| Objective | Why was the AI created, and why is its creator sharing information about it? |
| Bias | Are there biases in the machine learning model being used or in the data it was trained on? Have any ethical issues been raised about this product? |
| Ownership | Who owns this tool: a private company, a government, or an academic institution? |
| Type | Which machine learning model is being used, and how is it trained? Does it rely on human intervention? |
Source: Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry.
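For readers who want to apply the framework systematically, for example across several tools, the sketch below encodes the ROBOT criteria as a simple Python checklist. The structure, class, and tool names are illustrative assumptions added here; they are not part of the original ROBOT test.

```python
from dataclasses import dataclass, field

# The five ROBOT criteria (Hervieux & Wheatley, 2020) as prompts to answer.
ROBOT_QUESTIONS = {
    "Reliability": "Does the creator have a stake in the results? Are they credentialed and transparent?",
    "Objective": "Why was the AI created, and why is information about it being shared?",
    "Bias": "Are there biases in the model or its training data? Have ethical issues been raised?",
    "Ownership": "Who owns the tool: a private company, a government, or an academic institution?",
    "Type": "Which machine learning model is used, how is it trained, does it rely on human intervention?",
}

@dataclass
class RobotEvaluation:
    """Records a user's answers to the ROBOT questions for one AI tool."""
    tool_name: str
    answers: dict = field(default_factory=dict)

    def record(self, criterion: str, answer: str) -> None:
        # Only accept answers for the five defined ROBOT criteria.
        if criterion not in ROBOT_QUESTIONS:
            raise ValueError(f"Unknown criterion: {criterion}")
        self.answers[criterion] = answer

    def unanswered(self) -> list:
        """Return the criteria the user has not yet addressed."""
        return [c for c in ROBOT_QUESTIONS if c not in self.answers]

# Example usage (hypothetical tool name):
evaluation = RobotEvaluation("ExampleChatbot")
evaluation.record("Ownership", "Developed by a private company; funding sources undisclosed.")
print(evaluation.unanswered())  # -> ['Reliability', 'Objective', 'Bias', 'Type']
```

Keeping the criteria in one shared dictionary makes it easy to see at a glance which questions remain unanswered before drawing conclusions about a tool.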