Now that Hub 4 has been released, it’s time to introduce the Nextcloud Ethical AI Rating.
There are more and more risks associated with artificial intelligence, and as a transparent software company we have a responsibility to step in and protect our users.
Recently, Microsoft laid off its entire ethics and society team, the team that taught employees how to make AI tools responsibly. Nextcloud, on the other hand, embraces the ethics and challenges of today's AI and aims to take them head-on.
Challenges
The field of AI is moving fast, and many of the new capabilities face ethical and even legal challenges.
For example, there are problems with:
Privacy and security of user data
Discrimination and biases
Energy usage
In particular with neural network-based AI technologies, the mere availability of open source code is no longer enough to say you are in control of your data or that the software is safe or ethical. The data set and the software used for training, as well as the availability of the final model, are all factors that determine how much freedom and control a user has.
Ethical AI standards
Until Hub 3, we succeeded in offering features like related resources, recommended files, our priority inbox and even face and object recognition without relying on proprietary blobs or third-party servers.
Yet, while there is a large community developing ethical, safe and privacy-respecting technologies, there are many other relevant technologies users might want to use.
With Hub 4, we want to provide users these cutting-edge technologies – but also be transparent. For some use cases, ChatGPT might be a reasonable solution, while for other data, it is paramount to have a local, on-prem, open solution. To differentiate these, we developed an Ethical AI Rating.
Ethical AI rating rules
Our Ethical AI Rating is designed to give a quick insight into the ethical implications of a particular integration of AI in Nextcloud. We of course still encourage users to look more deeply into the specific solution they use, but hope that this way we simplify the choice for the majority of our users and customers.
The rating has four levels:
Red 🔴
Orange 🟠
Yellow 🟡
Green 🟢
The rating is based on points from these factors:
✅ Is the software (both for inferencing and training) open source?
✅ Is the trained model freely available for self-hosting?
✅ Is the training data available and free to use?
If all of these conditions are met, we give it a Green 🟢 label. If none are met, it is Red 🔴. If one condition is met, it is Orange 🟠, and if two are met, Yellow 🟡.
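To make the mapping concrete, here is a minimal sketch in Python of how the three criteria could translate into a rating label. The function and its names are purely illustrative, not part of any Nextcloud API.

```python
# Illustrative sketch only: maps the three Ethical AI Rating criteria
# to a colour label. Function and parameter names are hypothetical.

def ethical_ai_rating(open_source: bool,
                      model_available: bool,
                      training_data_available: bool) -> str:
    """Return the rating label based on how many criteria are met."""
    met = sum([open_source, model_available, training_data_available])
    return {0: "Red 🔴", 1: "Orange 🟠", 2: "Yellow 🟡", 3: "Green 🟢"}[met]

# Example: software and model are open, but the training data is not.
print(ethical_ai_rating(True, True, False))  # Yellow 🟡
```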
We add one additional note to the rating: bias. As it is impractical to prove there is no bias, we merely point out if, at the time of our last check, major biases (such as discrimination on race or gender in a face recognition technology, for example) were discovered in the data set or in the behavior of the model.
There are other ethical considerations for AI, of course. There are legal challenges around the data sets used (in particular, copyright), and the energy usage of deep neural networks especially is of great concern. Unfortunately, those concerns are extremely hard to quantify objectively, and while we intend to warn users of any open issues, we cannot (yet) include them in our rating.
For that reason, we recommend that users investigate for themselves, with the help of the Nextcloud Ethical AI Rating, what the consequences of using AI are for their individual case.