Archive for October, 2010

Popcorn.js Website and JavaScript commands

Posted in open source, Uncategorized, video, webmademovies on October 18, 2010 by scottdowne

Hey, so Popcorn.js has two new commands.

A website command, which will display a website in the targeted div, and a JavaScript command, which takes two parameters: a start function, fired on the event's in time, and a stop function, fired on the event's out time.

Example of the website command is:
<webpage in="00:04:00:08" out="00:04:19:00" target="iframes" src="" width="100%" height="100%"></webpage>

I just noticed it should probably be a: <webpage />

The script command is simple:
<script in="1" out="10" start="customJS.start" stop="customJS.stop"/>

    var customJS = (function() {
      var interval;
      var content = document.getElementById("content");
      function work() {
        var item = document.createElement("p");
        item.innerHTML = "working...";
        content.appendChild(item);
      }
      return {
        start: function() {
          var item = document.createElement("p");
          item.innerHTML = "starting...";
          content.appendChild(item);
          interval = setInterval(work, 1000);
        },
        stop: function() {
          clearInterval(interval);
          var item = document.createElement("p");
          item.innerHTML = "done!";
          content.appendChild(item);
        }
      };
    })();
The object above supplies the start and stop functions to call.
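To make the in/out behaviour concrete, here is a minimal sketch (my own naming, not Popcorn's actual internals) of how commands with in and out times could be fired against the video's clock: each tick of the current time starts any command whose range we have entered, and stops any command whose range we have left.

```javascript
// Hypothetical helper: builds a tick function from a list of commands,
// where each command has "in"/"out" times and start/stop callbacks.
function makeTimeline(commands) {
  var active = {};
  return function tick(currentTime) {
    commands.forEach(function (cmd, i) {
      var inside = currentTime >= cmd["in"] && currentTime < cmd.out;
      if (inside && !active[i]) {
        active[i] = true;   // entered [in, out): fire start once
        cmd.start();
      } else if (!inside && active[i]) {
        active[i] = false;  // left the range: fire stop once
        cmd.stop();
      }
    });
  };
}
```

In practice the tick would be driven by the video's timeupdate events; here it is just a plain function you call with a time.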


Graffiti Markup Language (GML) HTML5 player

Posted in open source on October 8, 2010 by scottdowne

I need to introduce everyone to Graffiti Markup Language. To quote the project: “Graffiti Markup Language (GML) is a specifically formated XML file designed to be a common open structure for archiving gestural graffiti motion data into a text file.” It is a very cool project and something I am excited to work on.

Today and yesterday I hacked together an HTML5 GML canvas player for displaying GML. I am basically porting an existing Flash player into HTML5, building on the previous work done in this area by Jamie Wilkinson, with massive improvements to the original player by Nick Cammarata. For less than two days of work I have something to show, though it still needs polish.

I still need to add support for hiding the play and seek bar when you scroll off of it, but that should be trivial, and I also want to do some optimization.
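The core of a player like this is small. Here is a rough sketch under my own naming (not the actual player's code): GML stores strokes as points normalized to 0..1, so each point gets scaled to the canvas size before being drawn as a connected line.

```javascript
// Scale a normalized GML point (0..1) to pixel coordinates.
function scalePoint(pt, width, height) {
  return { x: pt.x * width, y: pt.y * height };
}

// Draw one stroke (an array of normalized points) onto a 2D context.
function drawStroke(ctx, stroke, width, height) {
  ctx.beginPath();
  stroke.forEach(function (pt, i) {
    var p = scalePoint(pt, width, height);
    if (i === 0) {
      ctx.moveTo(p.x, p.y);
    } else {
      ctx.lineTo(p.x, p.y);
    }
  });
  ctx.stroke();
}
```

Animating playback is then just a matter of drawing points up to the current timestamp on each frame.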

Universal Subtitles + Popcorn.js

Posted in open source, video, webmademovies on October 6, 2010 by scottdowne

Popcorn is a library for tying all sorts of semantic data to a video, keyed on time: when the video hits this time, show this data. When we were finishing off the first demo and the initial Popcorn.js work, we kept our data in XML, and quickly noticed that writing all that XML was not a fun job, and not necessary. As a programmer, I feel XML is meant to be written by a machine, not a person, so how can I expect a film maker to write it?

We talked about later going back and writing a front-end GUI page that could be used to create the data and spit out an XML file you could download.

We also knew of Universal Subtitles, another open source project that was very similar. I am not sure who contacted whom, but we ended up collaborating with them so their interface could be used to create a Popcorn file of semantic data. We started with Wikipedia, Google Maps, and Twitter, then later put in the base subtitle.

This is much better than our initial plan of downloading an XML file, because now the data is kept in a central location instead of on someone's local machine, making it easier for everyone to share their subtitle work.

The idea is: you subtitle a video, keyed by the video's URL, using Universal Subtitles, and then for the Popcorn data source you just use that same link; Popcorn should take care of the rest.
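A hedged sketch of that flow (the index and function names here are mine, not Universal Subtitles' actual API): the video's URL is the key, so the same link that plays the video also looks up its subtitle data in the central store.

```javascript
// Hypothetical lookup: one central index keyed by video URL, instead
// of an XML file sitting on someone's local machine.
function getSubtitles(subtitleIndex, videoUrl) {
  return subtitleIndex[videoUrl] || [];
}
```

In practice Popcorn would fetch this over HTTP from the central service; here it is just an in-memory object to show the keying idea.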

This is still a very early hack, and not yet released on either Universal Subtitles' official webpage or in the most recent version of Popcorn… yet.

Another thing I would like to mention: adding a geolocation required finding the latitude and longitude of the location and pasting that into the XML, which was also not fun. So I created a bookmarklet that would show a Google map; you could pan to your location, hit done, and it would paste in the latitude and longitude for you.
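The bookmarklet itself needs the Google Maps UI, but the "paste it in for you" part is plain string formatting. A sketch of that helper; note the element and attribute names below are made up for illustration, not the real Popcorn XML format.

```javascript
// Hypothetical helper: turn a picked lat/lng pair into the XML snippet
// the bookmarklet would paste in for you.
function formatGeoXml(lat, lng) {
  return '<location lat="' + lat.toFixed(6) + '" lng="' + lng.toFixed(6) + '" />';
}
```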

Open Subtitles Design Summit 2010

Posted in open source, video, webmademovies on October 1, 2010 by scottdowne

Today I attended the Open Subtitles Design Summit 2010, and just wanted to quickly jot down some of what I took out of it.

The first discussion was on how to connect the subtitle with the video, so if someone has a video, they can search for the subtitle info without having to write it themselves. A few problems with this:
- Copyright issues. If someone makes a video, someone else could subtitle it and put it online without the content owner even knowing that this happened.
- Syncing multiple video links to the same video with a unique key, so there don't exist two subtitle files for the same video. The problem with this is a video that is essentially the same by name, but where one copy has commercials and thus is no longer the same from a technical standpoint. One idea to solve this is to get a fingerprint of the audio data; as far as subtitles are concerned, that is then the exact same video.
- The final main problem I took from this is where to host this subtitle data so people can look it up. Do you keep it in one place, like a database, or do you spread it around, like a torrent?

The second discussion was on video metadata, a term that can be thrown around without a clear definition.
- The main thing I got from this is that there are three types of metadata: data about the video, like the size, length, or other technical details; data about the video's contents, like the people in the video or the people that made it; and timed metadata, like a subtitle starting at time x and ending at time y. The concept of timed metadata helped me understand it, because it made me see the object inheritance of metadata. If you look at it this way, the video is data, the subtitle is metadata, and the start and end times are metadata about the subtitle, not the video itself. So maybe there are only two types, with subtitles being content metadata and the start and end being technical metadata about that subtitle.
- Another concept that interested me is that some of this data will never change, like the director: the director of a film will always be the director, but something like a rating will fluctuate. Neither is a new type of metadata, as both are data about the content, so content or technical metadata can be fluid OR concrete.
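One way to picture the inheritance described above is as a chain of objects, each being metadata "about" the thing below it. This is purely my own toy modelling, just for illustration:

```javascript
// The video is the data; the subtitle is content metadata about the
// video; the start/end times are technical metadata about the
// subtitle, not about the video itself.
var video = { kind: "data", url: "video.ogv" };
var subtitle = { kind: "content metadata", about: video, text: "Hello there" };
var timing = { kind: "technical metadata", about: subtitle, start: 3, end: 7 };
```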

The fourth discussion was about Popcorn.js and where we can go with it.
- We need to consider our XML format and make it standard, because we met another project doing something very, very similar, in a very similar way, so we have to take a closer look at the XML and see how we can create a standard between at least the two of us, to get the ball rolling.
- Also, because of the discussion on metadata, I now have a clear description of what Popcorn is… timed metadata.
- Having a command to link into the API of whatever people are listening to would just be a new command to add to the library.
- Something else that is really interesting is a working standard, not yet implemented, for a timed event listener called oncuechange… basically, do this function at this time. It can be found here, and it is very exciting and could improve the foundation of Popcorn.js.
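Since oncuechange isn't implemented anywhere yet, here is a sketch of emulating it (the names are mine, not the draft standard's): on every time update, find the cue covering the current time and call the handler only when that cue changes.

```javascript
// Hypothetical cue watcher: cues are { start, end, ... } objects, and
// onCueChange fires once per change, not on every time update.
function makeCueWatcher(cues, onCueChange) {
  var current = null;
  return function update(time) {
    var cue = null;
    for (var i = 0; i < cues.length; i++) {
      if (time >= cues[i].start && time < cues[i].end) {
        cue = cues[i];
        break;
      }
    }
    if (cue !== current) {
      current = cue;  // fire only when the active cue changes
      onCueChange(cue);
    }
  };
}
```

Hooked up to a video's timeupdate event, this gives you "do this function at this time" today, without waiting for the standard.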

The fifth discussion was on the openindie service, which is like a hub for independent video makers to get their videos out there and send screeners of their work to people.
- They were looking into getting subtitle data from Universal Subtitles in much the same way Popcorn.js is, and I think it is important for Popcorn and openindie to work together with Universal Subtitles to get the same data, so it only has one format. This would probably be different than the other subtitle info Popcorn uses, the difference being XML and JSON, and different standards for each when it comes to subtitles and semantic data.

The final discussion was a last little bit on what can be done in the next 3 months and what can be done in the next 12 months.
- The best thing from this was an idea where we could use the audio API to get the raw sound data of a video and read it, then play one sentence and pause, so whoever is entering the subtitle can keep up with the video. Write one sentence, hit enter, write the next, etc. This would be ideal and very intuitive. I talked about this with David Humphrey, the lead developer of the audio API work, who recently had a discussion on this; he suggested writing a library that would detect when the sound of a video changes, so you can fire events or do something else with it.
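A sketch of my own simplification of that idea: scan frames of raw audio samples, compute each frame's energy, and report the quiet stretches, which is where playback could pause at the end of a spoken sentence. The function and parameter names are mine, not from any real audio API.

```javascript
// Scan fixed-size frames of samples and return the indices of frames
// whose mean energy falls below the threshold (the quiet gaps).
function findQuietFrames(samples, frameSize, threshold) {
  var quiet = [];
  for (var start = 0; start + frameSize <= samples.length; start += frameSize) {
    var energy = 0;
    for (var i = start; i < start + frameSize; i++) {
      energy += samples[i] * samples[i];
    }
    if (energy / frameSize < threshold) {
      quiet.push(start / frameSize);  // frame index of a silent stretch
    }
  }
  return quiet;
}
```

A real version would work on the sample frames the audio API exposes during playback, but the gap-finding logic would look much like this.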