Warning: these instructions are given without any warranty.
If you decide to follow them, you are on your own, and it is your responsibility to verify that everything works as expected.
I do not take any responsibility for data loss or any other consequences.
Duplicity supports Rclone as a backend, and Rclone is compatible with hubiC in turn.
First, follow these instructions to configure a hubiC remote in Rclone.
Then pass something like rclone://hubic:/your-backup-container to Duplicity as the target.
Note the second colon! At first, I missed it, and the backup did not work for that reason.
Also, notice the lack of a trailing slash: if you add it, the first backup will succeed, but the following ones will fail.
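Putting the two points above together, a minimal invocation might look like this (the source path is a placeholder, and "hubic" stands for whatever name you gave the remote):

```shell
# Back up /home/me to the hubiC container "your-backup-container".
# Note: colon after "rclone", colon after the remote name, no trailing slash.
duplicity /home/me rclone://hubic:/your-backup-container
```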
Fortunate and unfortunate coincidences
You know what they say: «💩️💩️💩️💩️ happens». For example, data centers catch fire.
I waited for a bit but then bought another instance. Eventually, the old one was also reactivated, and I preferred moving back to it.
OVH did not have backups, but I had been diligent and made my own 😎️. However, since the OS had been installed more than 3 years earlier, I preferred starting from scratch for the second time.
And I made two mistakes:
- I forgot to change the destination e-mail address for backup errors (and for some reason, the alias for root did not work);
- I had never tested the backups manually.
Then, by a coincidence (a fortunate one, this time), I checked backupninja’s logs and found a problem.
The previous solution
hubiC is based on OpenStack, and it is almost fully compatible with its APIs, except for authentication.
However, the Duplicity developers knew this and provided everything necessary to interface with it.
In particular, they used pyrax, an old and now-deprecated library for interfacing with OpenStack containers. I guess they never updated it because hubiC does not accept new users.
My distribution does not package this library, so I installed it with pip, which I do not consider good practice. However, it worked for quite a long time.
But after almost one year without backups, I found that it now has some bugs. In particular, it implicitly converts Python bytes to str instead of using the decode method. As a result, files are uploaded as b'filename' instead of just filename.
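The difference is easy to reproduce in Python; this is a generic illustration of the bug, not pyrax’s actual code:

```shell
python3 - <<'EOF'
name = b'filename'
print(str(name))      # b'filename'  (the buggy behaviour: str() keeps the b'...' wrapper)
print(name.decode())  # filename     (the intended behaviour)
EOF
```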
A better solution
The solution described so far is very… hackish.
However, I noticed that Duplicity supports Rclone as a backend. If you do not know it, Rclone is a powerful CLI tool to interact with several cloud storage providers, hubiC included.
hubiC and Rclone
Setting hubiC up with Rclone is straightforward:
- start the configuration wizard with rclone config, then type n to create a new remote;
- choose the remote name you prefer;
- tell it the provider is hubiC;
- follow the on-screen instructions for the authentication.
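Once the wizard finishes, it is worth checking that the remote actually works before pointing Duplicity at it. Assuming you named the remote hubic:

```shell
# List the containers in the remote (empty output is fine on a fresh account).
rclone lsd hubic:
# Show quota and usage, which also confirms that authentication succeeded.
rclone about hubic:
```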
If you are working on a remote machine, as I was, you have two ways to perform the authentication. One is running rclone on your PC just for the authentication, but it is more involved.
The other one, if you use SSH, is running a tunnel to port 53682:
ssh -L 53682:localhost:53682 you@your-server
In the latter case, you need to tell rclone that you want to use auto-configuration.
Duplicity and Rclone
Duplicity wants you to pass a target URL as an argument.
For Rclone, the URL must look like rclone://hubic:/your-backup-container, where hubic is the name of your Rclone remote. The container you choose will be created automatically on the first backup run; you just need to invent a name for it.
Please notice the two colons (:) in that URL. Without the second one, rclone saves your files on your disk instead of uploading them!
Omitting the trailing slash is crucial as well. Initially, I had one: the first backup succeeded, but the following incremental ones failed.
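Putting it all together, a first full backup followed by an incremental one might look like this; paths and the container name are placeholders, not the exact commands from this setup:

```shell
# First run: a full backup; the container is created automatically.
duplicity full /home/me rclone://hubic:/your-backup-container
# Subsequent runs: incremental backups against the same target.
duplicity incremental /home/me rclone://hubic:/your-backup-container
# Inspect the backup chains stored remotely.
duplicity collection-status rclone://hubic:/your-backup-container
```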
Apart from this, I did not find any insurmountable difficulties, and I could finally use hubiC with Duplicity again.
hubiC had an attractive offer: you could have 25GB of redundant storage in European servers for free. And the paid offers were affordable as well. Its principal disadvantage was its speed: in practice, it was capped at 10Mbps. I suspect it was too cheap, and maybe OVH was at a loss with it. Therefore, they discontinued it, but they kept old instances active.
However, making it work on Linux has always been quite convoluted. If I had to look for something else today, I would consider Ionos’ S3 storage. The data is in Germany, the service is GDPR-compliant, and the protocol is a de-facto standard. However, it is a paid service. Therefore, I will keep using hubiC as long as I can.