Demo at demo.gravixar.com
Gravixar

Creative agency (UAE + Pakistan), 4-year engagement · 2022 – present

LucidLink + Wasabi for a creative agency, replacing Dropbox with a streaming file system that survives event delivery

Replaced a Dropbox + local-disk workflow with LucidLink Filespaces (streaming active storage) backed by Wasabi (low-cost archive). Editors start work instantly from any machine. Adoption hit 80%+ of event delivery and survived the worst part of event production: live handoffs under deadline.

Problem

The team was running on Dropbox plus local copies on each editor's machine. Local disks filled up with duplicate media. Event production wasted hours on uploads and downloads. Editing changes often required going back to the original PC because that's where the live project lived. Remote handoffs during events were a coordination nightmare.

Approach

Looked at three real options: a NAS deployed in both Pakistan and UAE (high capex + lifecycle replacements + duplication with existing cloud storage), continuing Dropbox (simple but slow for active production), or LucidLink Filespaces backed by Wasabi for archive. LucidLink behaves like a local drive but streams what you need on demand, which is exactly the shape an editor needs to jump into projects across machines. Ran a one-month pilot, picked the Frankfurt LucidLink region after testing latency, ran workshops, then rolled out one team at a time. Set the operating rule early: keep the active set disciplined, archive everything finished.

Outcome

80%+ of event delivery now runs through LucidLink. ~1.2 TB active dataset across 3 seats with disciplined pinning. Wasabi handles archive at much lower per-GB cost than Dropbox. The first event after rollout where handoffs happened instantly across machines was the moment adoption clicked. Growing to 5 users as active usage scales past 1.2 TB.

Notes

Before vs. after

Before: Dropbox for shared storage plus local copies on every machine. Local disks filling up with duplicate media. Event time wasted on uploads and downloads. Editing changes often required going back to the original PC because that's where the working project lived.

After: LucidLink Filespaces for active projects (streaming access, behaves like a local drive). Wasabi for archive and backup at lower per-GB cost. Work can start instantly from another machine without copying. Clear operational rule: keep the active set lean, pin only what matters.

Decision matrix

We explored three options:

  • NAS (Pakistan + UAE). Initial investment was high, lifecycle replacements meant ongoing capex, and it would duplicate storage we already had in the cloud.
  • Continue with Dropbox. Simple, but too slow for active video production. The whole problem was the latency on multi-GB project files.
  • LucidLink + Wasabi. Streaming active storage backed by cheap archive. The architecture matched the shape of the work.

LucidLink won because it behaves like a local drive but streams what you need on demand. That's exactly what editors need when they have to jump into projects quickly across machines and locations.

Cost and storage strategy

We structured costs around active vs. archive:

  • Active (LucidLink): ~1.2 TB across 3 seats (400 GB each). Growing as the team scales.
  • Archive (Wasabi): Completed projects, final renders, long-term retention. Per-GB cost much lower than Dropbox.
  • Policy: Avoid keeping large RAW sets in active storage unless necessary. Render and archive. RAW gets archived or deleted.

The operational rule that saved us: keep active projects lean and predictable. Archive renders and RAW separately. Don't let the active set become a junk drawer.
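The "keep it lean" rule can be backed by a simple audit script instead of memory. A minimal sketch in Python, walking the mounted Filespace like any local drive; the RAW extension list and any paths are assumptions for illustration, not the agency's actual policy:

```python
import os

# Hypothetical set of camera-RAW extensions to flag for archive or deletion.
RAW_EXTENSIONS = {".braw", ".r3d", ".arw", ".cr3", ".dng"}

def folder_size_gb(path):
    """Total size of all files under path, in GB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk; skip it
    return total / 1024 ** 3

def find_raw_files(path):
    """List files under path whose extension marks them as camera RAW."""
    hits = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            if os.path.splitext(name)[1].lower() in RAW_EXTENSIONS:
                hits.append(os.path.join(root, name))
    return hits
```

Run against the active-projects root, this gives a weekly number to compare against the ~1.2 TB budget and a concrete list of RAW sets that should be rendered and moved to archive.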

Implementation notes

  • One-month test pilot before any team rolled over
  • Workshops to introduce the change
  • One team at a time for actual adoption
  • Latency was the first thing to solve. After working with LucidLink support, we settled on the Frankfurt region for our team distribution.

Project hygiene rules we landed on

  • Pin active projects, including their required B-roll
  • Keep the active set disciplined (don't pin everything "just in case")
  • Same After Effects year/version on every machine
  • Fonts and plugins installed per machine (Filespaces streams the files, not the OS-level installs)

Results

Adoption took workshops and repetition. As always. The breakthrough was the first event where handoffs happened instantly across machines. After that the team stopped resisting.

Concretely:

  • Speed. Less time lost moving data. More time editing.
  • Handoffs. Start work from any machine, instantly. Even during event pressure.
  • Remote delivery. Smoother collaboration with screen share + shared project state.
  • Scale. Growing to 5 users as active usage increases beyond 1.2 TB.

Credits

Testing and rollout were led by Ops, with hands-on support from LucidLink during deployment. Special thanks to the LucidLink team for guidance on server selection, bucket planning, and troubleshooting the early latency issues.

What's next on this stack

  • Automate archive transfers (LucidLink → Wasabi) instead of doing it manually
  • Trigger workflows via monday.com status changes (so "shipped" auto-archives the project)
  • VM-based scheduled sync where needed
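Because Wasabi exposes an S3-compatible API, the manual archive transfer could be scripted with a standard S3 client. A sketch under stated assumptions: the endpoint region, bucket name, and prefix layout below are hypothetical, not the deployed configuration, and the function takes an injectable client so it can be wired to a scheduler or a monday.com-triggered workflow later:

```python
import os

# Assumption: a European Wasabi region endpoint; the real region would be
# chosen to match the team's latency testing, as was done for LucidLink.
WASABI_ENDPOINT = "https://s3.eu-central-1.wasabisys.com"

def archive_project(local_dir, bucket, prefix, s3_client=None):
    """Upload every file under local_dir to s3://bucket/prefix/,
    preserving the folder structure. Returns the uploaded object keys."""
    if s3_client is None:
        # Wasabi speaks the S3 protocol, so boto3 works against its endpoint.
        import boto3
        s3_client = boto3.client("s3", endpoint_url=WASABI_ENDPOINT)
    uploaded = []
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, local_dir).replace(os.sep, "/")
            key = f"{prefix}/{rel}"
            s3_client.upload_file(path, bucket, key)
            uploaded.append(key)
    return uploaded
```

Pointing this at a finished project folder and a per-event prefix (e.g. year/event-name) keeps the archive browsable while freeing the active set.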

Next step

Bring me a real operations problem. I'll show you the system before you sign anything.

30-minute discovery call. If we're not a fit, you walk with notes you can use anyway.