Opened 12 years ago

Closed 10 years ago

#506 closed enhancement (wontfix)

P2P distribution of content - self-hosted CDN (content distribution network)

Reported by: arthur.lutz Owned by:
Priority: minor Milestone:
Component: programming Keywords: cdn, p2p, inter instances, federation
Cc: Parent Tickets:

Description

While thinking about the limitations of our asymmetric broadband access
(less upload than download), I thought of a hack that could speed things
up while browsing an HTML-generated photo gallery.

The idea would be to host the same album on N servers and, when
generating the "thumbnails" page, cycle through the N servers in the
<img src=> tags.
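A minimal sketch of the cycling idea, assuming a hard-coded list of mirror hosts (the hostnames and function name here are hypothetical, just for illustration):

```python
from itertools import cycle

# Hypothetical mirrors: the N servers hosting identical copies of the album.
MIRRORS = [
    "https://nas-alice.example.net",
    "https://nas-bob.example.net",
    "https://nas-carol.example.net",
]

def thumbnail_page(image_paths, mirrors=MIRRORS):
    """Generate <img> tags, spreading requests round-robin across mirrors."""
    tags = []
    for path, base in zip(image_paths, cycle(mirrors)):
        tags.append('<img src="%s/%s">' % (base, path))
    return "\n".join(tags)
```

With four images and three mirrors, the fourth <img> wraps back to the first server, so each visitor's downloads are spread across all N upload links.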

To start with, you could manually reference identical albums on other
servers, but a further feature would be to "copy" an album to your
gallery so that it automatically references the other one (and vice
versa).

The setup sounds complex but could be kept really simple. The idea is to
use in-house devices. For instance, NAS devices usually ship with a small
application to publish the photos stored on them, generally either a PHP
app or an HTML generator. This would be the kind of setup where, in a
family, each member has a NAS device; when looking at a gallery, they can
copy it to their own NAS, and it then gets distributed.

Change History (3)

comment:1 by arthur.lutz, 12 years ago

I know that a WordPress plugin has this sort of approach, but in a very limited way: W3 Total Cache lets you configure a self-hosted CDN, but I think it is limited to one FTP site, and then you configure a subdomain that is used.

It would be nice to avoid having to set up complicated subdomains that you then have to distribute to the other peers.

I haven't taken the time to read the work being done at http://webp2p.org/, but it would be nice to have such a feature without depending on very recent browsers.

comment:2 by Simon Fondrie-Teitler, 11 years ago

Component: infrastructure → programming

comment:3 by Ben Sturmfels, 10 years ago

Resolution: wontfix
Status: new → closed

So if I'm understanding correctly, you're thinking of a system where media files would be pushed out to neighbour MediaGoblin instances ahead of time so that when a visitor comes along, the bandwidth they use can be spread out across multiple instances, rather than hitting one instance hard.

That sounds super cool, but also super hard.

Hard like a cross between Tahoe-LAFS and BitTorrent, and something that would likely lead to several academic papers. The system would require a protocol to evenly distribute the media, synchronize the files, keep track of where files are stored, handle instances joining and leaving the network, handle outages, and manage trust between nodes to prevent malicious use.

Alternatively, you could take a less automated approach that depends on owning multiple nodes yourself and carefully configuring and maintaining them to work together. MediaGoblin isn't trivial to deploy as it is, though, so I don't know how practical this would be.

I'm going to close this request, as I don't think it's within reach in the next few years. If you have some thoughts about building something like this, please feel free to reopen the request.

Note: See TracTickets for help on using tickets.