Escape the Lock-In: Zero-Downtime Migration from Artifactory to Open Source Solutions
We used JFrog Artifactory internally for several years, but mostly as a network file server, without taking advantage of its more advanced features. It stored everything from compiled application JARs and deployment images to test assets, mobile app builds, and the various artifacts passed between our pipelines.
Over time, Artifactory became more of a hassle. Routine maintenance, including garbage collection and cleanup of older files, became a constant chore. I ended up creating a monthly ticket to delete unused or outdated artifacts. And despite our efforts, it still hit critical issues, including service outages due to running out of disk space.
As our infrastructure matured and disaster recovery (DR) and high availability (HA) requirements became more pressing, we ran into licensing roadblocks. Supporting these use cases in Artifactory meant upgrading to more expensive tiers, which made us rethink our use of the platform.
Defining Our Requirements#
We decided to replace Artifactory and started with a simple checklist of requirements:
- Simple deployment, ideally containerized and compatible with ECS or Kubernetes.
- Internal network accessibility.
- Drop-in compatibility with our existing artifact URLs and structure.
- Unauthenticated read access.
- Authenticated write support (via IAM).
- Support for existing artifact writers, including Jenkins and Fastlane.
- S3 as a backend to offload storage scaling and lifecycle management.
- Zero-downtime migration.
The Solution: nginx-s3-gateway#
We landed on nginx-s3-gateway, which met our needs and was flexible enough to let our existing tooling handle the rest.
It took just minutes to deploy the container into ECS using our standard infrastructure. IAM roles provided access to a designated S3 bucket, and since S3 supports native cross-region replication and offers 11 nines of durability, we were confident in the reliability of our storage layer.
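For reference, here is a minimal sketch of how that task definition could be registered with boto3. The image reference, role ARNs, bucket name, and region are placeholders, and the environment variable names follow the nginx-s3-gateway README at the time of writing, so verify them against the project's documentation before reusing this.

```python
# Minimal sketch: register an ECS task definition for nginx-s3-gateway.
# All ARNs, names, and the image reference below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="artifact-gateway",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    # The task role is what grants the gateway read access to the artifact bucket.
    taskRoleArn="arn:aws:iam::123456789012:role/artifact-gateway-task-role",
    containerDefinitions=[
        {
            "name": "nginx-s3-gateway",
            # Image path is an assumption; pin whatever tag your registry mirrors.
            "image": "ghcr.io/nginxinc/nginx-s3-gateway/nginx-s3-gateway:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            # Variable names as documented in the nginx-s3-gateway README.
            "environment": [
                {"name": "S3_BUCKET_NAME", "value": "example-artifact-bucket"},
                {"name": "S3_REGION", "value": "us-east-1"},
                {"name": "S3_SERVER", "value": "s3.us-east-1.amazonaws.com"},
                {"name": "S3_SERVER_PORT", "value": "443"},
                {"name": "S3_SERVER_PROTO", "value": "https"},
                {"name": "S3_STYLE", "value": "virtual"},
                {"name": "AWS_SIGS_VERSION", "value": "4"},
                {"name": "ALLOW_DIRECTORY_LIST", "value": "true"},
            ],
        }
    ],
)
```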
We ran the gateway behind a load balancer with two replicas for redundancy. For disaster recovery, we planned to spin up a second set of replicas in our failover region. Early testing confirmed that reads and writes worked seamlessly, URL formats matched, and compatibility was intact.
Achieving Zero-Downtime Cutover#
To minimize disruption, we opted for a dual-write strategy before flipping DNS:
- DNS Records: We created two new records, one for the original Artifactory and one for the new S3 gateway, which allowed us to migrate gradually.
- Jenkins Integration: Jenkins, our primary artifact writer, was easy to update. Each job that uploaded to Artifactory was modified to also upload to the new S3 location, using the IAM credentials already in use by each ECS task (see the sketch after this list). The Artifactory configuration was also updated to use the new URL so it would keep working after the DNS change.
- Data Migration: We exported ~500 GB of existing data from Artifactory using the JFrog CLI's `download` command, then uploaded the artifacts to S3 using the AWS CLI.
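To make the Jenkins change concrete, here is a rough sketch of the dual-write step in Python, using requests against Artifactory's deploy endpoint and boto3 for S3. Our actual jobs do this with shell steps and existing plugins; the hostnames, repository path, token variable, and bucket name below are illustrative only.

```python
# Sketch of the dual-write step: push the same artifact to both backends.
# Hostnames, repo/bucket names, and the token env var are illustrative only.
import os
import boto3
import requests

ARTIFACTORY_URL = "https://artifactory.internal.example.com/artifactory"
S3_BUCKET = "example-artifact-bucket"

def dual_write(local_path: str, repo_path: str) -> None:
    """Upload one artifact to Artifactory and to the new S3 location."""
    # 1) Legacy write: Artifactory accepts deployments via a plain HTTP PUT.
    with open(local_path, "rb") as fh:
        resp = requests.put(
            f"{ARTIFACTORY_URL}/{repo_path}",
            data=fh,
            headers={"Authorization": f"Bearer {os.environ['ARTIFACTORY_TOKEN']}"},
            timeout=300,
        )
    resp.raise_for_status()

    # 2) New write: same key layout in S3, so gateway URLs match the old ones.
    boto3.client("s3").upload_file(local_path, S3_BUCKET, repo_path)

dual_write("build/libs/service.jar", "libs-release-local/service/1.2.3/service.jar")
```

Because the S3 key mirrors the old repository path, the gateway serves each artifact at the same URL clients were already using.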
With everything in place, we changed the primary DNS record to point to the S3 gateway’s load balancer. We monitored traffic, builds, and deployment pipelines closely—and everything just kept working.
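For completeness, the cutover itself amounts to a single record update. The sketch below assumes Route 53, since the rest of our stack is on AWS; the zone ID, hostname, and load balancer details are placeholders.

```python
# Sketch of the DNS cutover, assuming Route 53 as the DNS provider.
# Zone ID, record name, and ALB details are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Point artifact hostname at the nginx-s3-gateway ALB",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "artifacts.example.com",
                    "Type": "A",
                    # Alias to the gateway's load balancer; this hosted zone ID
                    # is the ALB's canonical zone, not the domain's.
                    "AliasTarget": {
                        "HostedZoneId": "Z00000000ALBZONE",
                        "DNSName": "artifact-gateway-alb-123456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ],
    },
)
```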
One Hiccup#
The day after migration, we found one upload job still writing directly to the Artifactory API. The Fastlane integration was using the Artifactory Ruby gem. We quickly updated it to use the S3 upload method, and all was well.
The Results#
- ✅ No downtime — users experienced zero disruption.
- ✅ No tooling changes — most teams didn’t even know we made the switch.
- ✅ Lower cost, higher flexibility — no licensing headaches, and S3 lifecycle rules handle old files.
- ✅ Simplified ops — easier to maintain and extend with fewer moving parts.
What’s Next#
- Decommission the legacy Artifactory instance.
- Deploy DR replicas of the S3 gateway in our failover region.
- Add S3 lifecycle policies to clean up older files in specific paths.
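As a preview of that last item, here is a minimal sketch of a prefix-scoped lifecycle rule with boto3; the bucket name, prefix, and retention window are placeholders rather than our production values.

```python
# Sketch of a prefix-scoped S3 lifecycle rule. Bucket, prefix, and the
# 90-day retention window are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-artifact-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-ci-snapshots",
                # Only artifacts under this prefix are cleaned up.
                "Filter": {"Prefix": "ci-snapshots/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
                # Also tidy up failed multipart uploads under the same prefix.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```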
This migration proved that you don’t need expensive commercial tooling to support artifact storage at scale, just careful planning, the right open source components, and a bit of DNS magic.