-
The word “autonomous” is being thrown around these days, often to imply that software is running without human intervention. But it still does not mean software can make decisions outside the constraints of its own programming. #fauxtonomy Thread 🧵 >>
-
…in reply to @axbom
Software cannot learn what it was not programmed to learn. Importantly, it cannot arrive at the conclusion: “no, I do not want to do this anymore” – something that should perhaps be considered a core part of autonomy.
-
…in reply to @axbom
In The Elements of Digital Ethics I referred to autonomous, changing algorithms within the topic of invisible decision-making. axbom.com/elements/
-
…in reply to @axbom
> “The more complex the algorithms become, the harder they are to understand. The more autonomously they are allowed to progress, the further they deviate from human understanding.”
-
…in reply to @axbom
I was called out on this phrasing and am on board with the criticism. Our continued use of the word autonomous is misleading and could itself contribute to harm. qoto.org/@Shamar/108220449702792720
-
…in reply to @axbom
First, it underpins the illusion of thinking machines – something we should keep reminding ourselves we are not close to achieving. Second, it provides makers with an excuse to avoid accountability.
-
…in reply to @axbom
If we contribute to perpetuating the idea of autonomous machines with free will, we contribute to misleading lawmakers and society at large.
-
…in reply to @axbom
More people will believe makers are faultless when the actions of software harm humans, and the pressure to enforce accountability will weaken.
-
…in reply to @axbom
Going forward, I will work on shifting my vocabulary. For example, I believe ‘faux-tonomy’ (with an added explanation, of course) can bring attention to the deceptive nature of claimed autonomy.
-
…in reply to @axbom
When talking about learning I will try to emphasise simulated learning. When talking about behaviour I will strive to underscore that it is illusory.
-
…in reply to @axbom
I’m sure you will notice I have not addressed the term AI. It is itself an ever-changing concept, used carelessly by creators, media and lawmakers alike. We do best when we avoid it altogether, or are very clear about what we mean by it.
-
…in reply to @axbom
The point I make about invisible decision-making still stands, as it is really about the lack of control over algorithms.
-
…in reply to @axbom
This has been clearly exemplified by recent reporting on how Facebook engineers do not even know how data travels, and where, within Facebook: vice.com/en/article/akvmke/facebook-doesnt-know-what-it-does-with-your-data-or-where-it-goes
-
…in reply to @axbom
> “We do not have an adequate level of control and explainability over how our systems use data,” Facebook engineers say in a leaked document. I do, however, want to find a way to phrase my explanation of this issue in The Elements of Digital Ethics differently.
-
…in reply to @axbom
Your thoughts on this are appreciated. #fauxtonomy This thread is also available as a post, which I will update when necessary: axbom.com/fauxtonomy/