Last week, we announced that 9 German education and research organizations moved from ownCloud to Nextcloud together with the TU Berlin. Today we published a case study on our website that details the TU Berlin migration and the results achieved, including 50% lower base and 38% lower peak database loads. The end result: faster service for users at lower cost.
TU Berlin’s ownCloud-Nextcloud infrastructure
The TU Berlin was one of the first universities to provide its own file sync and share solution, having evaluated ownCloud as early as 2011. It provides cloud storage within the DFN cloud research program, today serving 16 member research and higher education institutions. The service has over 22,000 active users at the TU Berlin alone, with more at the other organizations; storage for the TU Berlin itself is well over 70 TB, with students allowed up to 20 GB of data and staff members up to 100 GB.
The TU Berlin uses load balancers to spread user requests across 4 application servers, each running an NGINX/MySQL/PHP 5.6 stack on CentOS 7.3. Data resides on GPFS-based storage with a Galera MySQL cluster for the database, LDAP and Kerberos handle authentication, and all of it runs in LXD containers managed with OpenStack.
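A setup like the one described above can be sketched with NGINX's upstream load-balancing directives. This is a minimal illustration only: the hostnames, ports, and paths below are hypothetical and do not reflect TU Berlin's actual configuration.

```nginx
# Hypothetical sketch of load-balancing across four application servers.
upstream nextcloud_app {
    server app1.cloud.example.org:443;
    server app2.cloud.example.org:443;
    server app3.cloud.example.org:443;
    server app4.cloud.example.org:443;
}

server {
    listen 443 ssl;
    server_name cloud.example.org;

    location / {
        # Forward requests to one of the application servers,
        # preserving the original host and client address.
        proxy_pass https://nextcloud_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

By default NGINX distributes requests round-robin across the listed servers; weights or alternative balancing methods can be added per server line if needed.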
Migration
After Nextcloud won a tender for providing a new file sync and share solution, TU Berlin decided to migrate to Nextcloud 11 which promised significant scalability, security and feature improvements. The team is also interested in exploring Nextcloud Global Scale in the future, for scaling up their DFN cloud service and decreasing the total cost of ownership.
The migration was planned over the course of three weeks and executed in one, fitting the update window between the outcome of the tender and the end of the old contract. A test environment built from a copy of production was used to prepare. As part of the migration to Nextcloud 11, the TU Berlin also decided to move from Apache to NGINX, which was made possible by the native SAML support in Nextcloud.
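Nextcloud's native SAML support mentioned above ships as the user_saml app, which an administrator can enable from the command line. The commands below are a hedged sketch: the web server user and Nextcloud installation path vary by deployment.

```shell
# Enable Nextcloud's SSO & SAML authentication app via occ.
# Run as the web server user from the Nextcloud installation root;
# the user (www-data) and path are deployment-specific assumptions.
sudo -u www-data php occ app:enable user_saml
```

The identity-provider details (entity ID, single sign-on URL, certificate) are then configured under the SSO & SAML authentication section of the admin settings.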
The migration process itself was ready on time and worked exactly as planned; the TU Berlin did not need any stand-by support from Nextcloud. Overall, the migration went like a routine upgrade.
Results
Before Nextcloud 11 was deployed, the TU Berlin faced scaling issues with heavy peak load on its database cluster during working hours. The migration cut peak load by over 38%, while off-peak times show a 49% reduction in server load. Thanks to this improvement, users enjoy a snappier interface and faster syncing, while the TU Berlin can now grow further without significant investment.
Server load before and after migration from ownCloud to Nextcloud
You can find more details in the case study on our website, and Dr.-Ing. Thomas Hildmann from the TU Berlin will talk about the migration and its results during Focus Friday at the Nextcloud Conference later in August.