AI has a lot of potential: to let us do things better and faster, but also to cause great harm to our privacy, creativity and perhaps even our mental well-being. We believe most of it is yet to be discovered, but today we need to figure out how to make it work for us, and not against us (or others).
And this is why we need AI ethics. At Nextcloud, we care about privacy and transparency, and believe that the ethical use of AI tools in both commercial and personal settings is essential.
In this article, we delve into five major challenges confronting organizations in their quest for ethical AI adoption: issues with major providers of AI tools, transparency concerns, regulatory compliance, data sovereignty challenges, and the dilemma of single-vendor ecosystems. By exploring these challenges in depth, we aim to provide the insights needed to navigate the ethical complexities of AI adoption and establish a safer, more sustainable approach to business.
Nextcloud Hub is an AI-powered collaboration platform that offers freedom of choice when it comes to hosting AI, sourcing an appropriate model, and choosing the right approach to AI integration.
Nextcloud makes an ongoing effort to promote the ethical use of AI tools. To assist our users, we employ our Ethical AI Rating to help them choose tools that match the constraints and principles of their business.
Tech giants like Google and Facebook claim to scrutinize their development and use of AI, addressing issues such as bias, privacy, and accountability. Creating dedicated research boards, drafting ethical guidelines, and participating in forums and high-profile collaborations help them secure a position as opinion leaders and ambassadors of AI ethics.
Those initiatives also serve as a differentiator in a competitive market and help companies improve their public image in a field where consumer trust means everything. Unfortunately, ethical AI adoption often turns out to be simple window dressing, with profit motives prevailing over ethical pursuits, as we see big AI providers dissolve their ethics teams amid growing AI product investments:
Responsible innovation efforts and dedicated ethics teams can amount to a veneer of ethical responsibility while the deeper, systemic issues inherent in AI deployment remain unchallenged.
One of these challenges is the transparency of AI training practices.
Even though companies are legally required to inform users about how their data is processed, some AI providers, like Meta, collect vast amounts of data from user content under policies that are very hard to opt out of. And this is not the only example of tech giants cutting corners to harvest data for AI training when running out of supply.
For example, in 2021, OpenAI reportedly transcribed over one million hours of YouTube videos to feed data to ChatGPT. Meanwhile, according to two members of Google's privacy team, in 2022 the company wanted to expand its use of consumer data for AI training, including publicly available content in Google Docs, Google Sheets and related apps.
Data privacy regulations play a crucial role in governing the use of AI technologies, ensuring that individuals' privacy rights are protected and that AI-powered applications are used responsibly. From the perspective of a company employing AI tools in its business, compliance with such regulations is essential.
In the European Union, such regulation is provided by the General Data Protection Regulation (GDPR) and the AI Act, an embodiment of the common regulatory framework for AI. In the US, the National Institute of Standards and Technology issued the AI Risk Management Framework (AI RMF), which provides guidance to companies and other entities on using, designing and deploying AI. However, this framework is voluntary and carries no penalties for noncompliance.
In some regions, legislation also controls the collection and processing of specific types of data that companies may collect inadvertently, for example protected health information (PHI) and biometric data gathered through various AI-powered health apps. In the US, the Health Insurance Portability and Accountability Act (HIPAA) and the Illinois Biometric Information Privacy Act (BIPA) regulate the collection of health and biometric data.
Ethical AI certification standards are designed to ensure that the development and deployment of AI technologies align with societal values, promote trust, and mitigate potential harms. While noncompliance does not lead to legal penalties, companies still face certain risks, such as reputational damage and strained stakeholder relations. Examples include the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the EU Ethics Guidelines for Trustworthy AI, ISO/IEC JTC 1/SC 42, AECP, RAIL, and more.
Regulations vary by region and industry, and the first essential step is to research the data protection laws relevant to your business. Noncompliance, even if unintentional, may lead to serious consequences, including both fines and reputational damage. While these regulations enforce various policies, there are common principles to bear in mind that can help minimize risks:
We provide our customers with direct consultation services and multiple resources to support their compliance efforts. This includes a high-level 12-step checklist offering an overview of key compliance requirements and a detailed administrator manual providing concrete, hands-on guidance for implementing compliance measures effectively.
The algorithms and data usage policies of public AI services lack transparency, which makes it difficult for organizations to ensure ethical AI practices. Moreover, when employees use publicly available cloud-based AI tools for work, this can create compliance issues and privacy risks related to data location:
Hosting AI tools and their data locally is crucial for ensuring robust data privacy and compliance. While cloud-based solutions offer flexibility and scalability, local hosting provides the control and assurance necessary for handling sensitive or regulated data effectively.
AI services provided by big tech vendors are often highly integrated with their products, offering smoother operations and better overall performance on the user side. However, the risks of being locked into a mono-provider ecosystem include little flexibility in choosing which tools to employ and heavy dependence on the vendor's decision-making and product management. This also creates strategic risks, not only for companies but for society as a whole.
After the Danish privacy regulator ruled against sharing students' data with Google, the company reportedly promised to change the way it processes data in order to continue supplying Google products to Denmark's schools. This means the schools can avoid short-term disruptions and also save funds, as companies like Google and Microsoft make their products accessible to educational organizations.
However, in the long term, by providing children with their proprietary technology, companies gain a strong grip on their future choices: children become habituated to using the products and sharing their data from an early age.
Similarly, it is important to act strategically when adopting AI in daily life: a negative change can happen gradually and go unnoticed until it is too late. While students' data isn't yet used to train foreign big tech AI en masse, tomorrow it might be, given the steady growth of AI's popularity. A student who leaves school may keep using Google because it has already trained a personal AI for them, and any other product will be hard to adopt.
Vendor independence should be part of a long-term strategy. European organizations are better off using European AI that is part of a sovereign ecosystem, hosted in Europe and trained on local data.
Nextcloud Hub is the most popular self-hosted collaboration platform, integrating file sharing, document collaboration, groupware, and videoconferencing tools in a single modular interface. It is secure and private by design, gives you ultimate control over your data, and ensures maximum compliance.
Recognizing the great potential AI holds for our daily life and work, we powered Nextcloud Hub with AI features that deliver performance while respecting your privacy. It features the Nextcloud Assistant, an AI-powered interface that enhances the entire platform with versatile automation features and tools for communication and content creation. You build your AI stack the way you want it, with multiple apps, models and deployment formats available.
Discover Nextcloud Hub, the privacy-first open-source solution
for business collaboration that puts you in the driver's seat.