We’re elbow deep in turning the Butter app into an API that apps can be written around.
I’m just going to start with a high-level explanation of how the pieces fit together.
Layer 2: The Modules. A module is a piece of Butter that can be used to create a Butter app. The initial modules will be a previewer, a track editor, and a timeline.
Layer 3: The API. This is the communication layer between the modules and the developer. The developer uses the API to register modules, and the modules use the API to communicate with each other.
I’ve been working on the timeline module, and I have a demo up. This module is where track event visualisation happens: you can add track events to the video, and they’re displayed in a timeline. Once added, you can move a track event around to change its position in the timeline. The timeline can also interact with the video’s time, seeking and scrolling around.
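As a rough mental model, a track event boils down to a start/end pair in video time, and moving it in the timeline just shifts that pair. Here’s a small sketch of that idea; the function and property names are mine for illustration, not Butter’s actual API:

```javascript
// Hypothetical track event: start and end are in seconds of video time.
function createTrackEvent(start, end, data) {
  return { start: start, end: end, data: data };
}

// Moving an event shifts its start/end by the same amount, clamped so
// the event stays within the video's duration.
function moveTrackEvent(event, newStart, duration) {
  var length = event.end - event.start;
  var start = Math.max(0, Math.min(newStart, duration - length));
  return { start: start, end: start + length, data: event.data };
}

var ev = createTrackEvent(2, 5, { text: "hello" });
// Dragging past the end of a 60s video clamps the 3s event to 57–60.
var moved = moveTrackEvent(ev, 58, 60);
```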
I explored a few ways to display the timeline. My solution had to solve two main problems: it had to be fast, and it had to represent a really long video’s time without bogging down the browser. A canvas has limits on how large it can be.
So I figured I would display the timeline in HTML; that way there wouldn’t be a canvas that is too large, and it should be fast, considering it’s just HTML. This had the problem of not scaling very well: some pixels would blur and others would be crisp, depending on the allowed space. I tried three HTML-based methods: displaying images in CSS using background-image, appending image elements, and even hacking empty divs with borders to serve as the timeline’s lines. All of these had scaling issues.
This left me with canvas. I could either render one large canvas once, or draw a smaller canvas and only render the region currently being viewed, updating the offset as the user scrolls around. The first was no good, as a large canvas might become an issue, and the second worried me because it was a performance hit I wanted to avoid if possible. I also considered drawing an offscreen canvas and displaying its contents in an image using a data URI of the offscreen canvas.
I ended up with a canvas rendered once. I solved the problem of it being too large by allowing it to be scaled down while still representing the video’s whole time.
I first draw a bar one pixel in width for each quarter of a second, with the half-second and full-second bars being three or six pixels longer, respectively. This wasn’t too hard once I calculated how many pixels a second is, based on the video’s duration in seconds and the container’s width in pixels.
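The layout of those bars is just a little arithmetic. Here’s a sketch of how I think of it, as pure helper functions you could feed into the actual canvas drawing loop (the names and the base tick height are my own, not the real code):

```javascript
// How many pixels one second of video occupies in the timebar.
function pixelsPerSecond(containerWidth, duration) {
  return containerWidth / duration;
}

// Height of the i-th quarter-second tick: full seconds are six pixels
// longer than the base, half seconds three pixels longer.
function tickHeight(quarterIndex, baseHeight) {
  if (quarterIndex % 4 === 0) return baseHeight + 6; // full second
  if (quarterIndex % 2 === 0) return baseHeight + 3; // half second
  return baseHeight;                                 // quarter second
}

// x position (in pixels) of the i-th quarter-second tick.
function tickX(quarterIndex, containerWidth, duration) {
  return quarterIndex * 0.25 * pixelsPerSecond(containerWidth, duration);
}
```

A drawing loop would then iterate `quarterIndex` from 0 to `duration * 4`, drawing a one-pixel-wide bar of `tickHeight(...)` pixels at `tickX(...)`.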
The next step was to draw timestamps, in SMPTE format, along the timebar. Once the timebar was scaled down, the timestamps would start to overlap, so I had to account for this and make sure I only draw a timestamp for the first whole second that falls within the allotted space in pixels.
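The two pieces of that step can be sketched like this: formatting a whole second as an SMPTE-style timestamp, and deciding which seconds get a label. This is my own reading of the approach, with made-up names; since only whole seconds are labelled, the frame field is fixed at 00:

```javascript
// Format a whole-second value as an SMPTE-style HH:MM:SS:FF timestamp.
function toSMPTE(totalSeconds) {
  function pad(n) { return (n < 10 ? "0" : "") + n; }
  var h = Math.floor(totalSeconds / 3600),
      m = Math.floor(totalSeconds / 60) % 60,
      s = Math.floor(totalSeconds) % 60;
  return pad(h) + ":" + pad(m) + ":" + pad(s) + ":00";
}

// Label only the first whole second that falls inside each slot of
// `labelWidth` pixels, so scaled-down timestamps never overlap.
function labelledSeconds(duration, pixelsPerSecond, labelWidth) {
  var labels = [];
  var nextFreeX = 0;
  for (var s = 0; s <= duration; s++) {
    var x = s * pixelsPerSecond;
    if (x >= nextFreeX) {
      labels.push(s);
      nextFreeX = x + labelWidth;
    }
  }
  return labels;
}
```

For example, a 10-second video at 10 px/s with 25 px-wide labels gets timestamps only at 0, 3, 6, and 9 seconds.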
It all turned out very well. All that is left is making a GitHub commit out of it, writing some notes in the ticket, blogging, and merging it in with the other two modules.