POTTY (Project Onto Table ThingY)

Why

The fact of the matter is - I’m terrible at drawing. I’m amazing at tracing, but just awful at drawing. So when my friend Victor showed me his tracing table - I was like: I have to have one.

Right, so the main principle of a tracing table is that you’ve got a table (I’ve researched it, sounds crazy, but it’s true) with a backlight.

‘Why would you want a table with a backlight, Aleks?’ Well, reader, say you’re doing animation. Then you’d like to draw your character, preferably on a thin sheet of tracing paper, put it on the tracing table, put a clean sheet of tracing paper on top and redraw the same character adjusting for their movement. Like so:

Note: I’ve never actually proactively researched tracing tables. This is how I’ve always imagined they worked, based on what I’ve seen in random YouTube videos. The main idea seems to be correct anyway.

Planning

The crucial thing to remember is that I’m good at tracing, but terrible at drawing. So how about I make my table project a chosen image onto its surface? Then I can give up on ever learning how to draw and hone my tracing techniques instead.

So what should the table be like? My first idea was to get a normal table and, somehow, with the very scientific help of a bunch of nails and a hammer, integrate a tablet, make it minimally responsive to touch and there you have it! That’s not ideal though. It’d need to be a pretty large tablet. Will it stick out? What if I make a tablet-shaped hole in the table? But then, what if I decide to change the size of the tablet? What if I wanted to use the table for something else when I’m not tracing? Will that mean that whenever I’m not using the tablet for tracing - the table would contain a huge ugly rectangular hole? Ugh, this sucks. Okay, alternatively, I can cover the hole with a sheet of glass and get rid of the tablet idea altogether. Maybe use a light or a projector fixed at the bottom of the table projecting into the hole. That still means buying and mutilating a table though. Ugh.

The projector idea is good, though. Let’s work with it. How about I get a table with a glass top and then fix a projector to the bottom of the table? Perfect.

How would the projector know what to project? Well, I have a RaspberryPi that’s sitting around not doing anything. How about this: I upload images I want to trace to… wherever - ‘the cloud’, then when I switch on the RaspberryPi - it connects to ‘the cloud’, downloads the images and the projector projects them onto the table. So far so good.

Let’s do an inventory of things I need:

  • Table with a glass top;
  • Short-throw projector (since the table probably won’t be several meters high);
  • RaspberryPi;
  • Some sort of an online file storage solution, preferably free;
  • A tool to upload the files to the file storage (let’s say written in Java);
  • Another tool to download and display the files (let’s also say written in Java).

Hardware

So let’s start with the easy part. A table with a glass surface. To cut a long story short - I tried several furniture stores (yes, including IKEA) and they have tables with glass surfaces, sure, but the surfaces have patterns on them. For a bit I considered getting one anyway and scraping the pattern off, but then figured it might be: a) too much effort; b) not a good idea for the tracing surface to have scratches and stuff.

Okay, let’s build our table ourselves.

I ended up purchasing two sets of table legs at IKEA - Oddvald.

Upon consulting with Mike (the handiest IBMer I know), I’ve also purchased a 10mm thick transparent acrylic table top from Simply Plastics. The reasoning for acrylic was that it’s less fragile than glass, and 10mm should be thick enough for it not to be bendy. The total sheet size I went for was 100x73cm.

Now, the projecting part. I needed a short-throw projector, since the distance from the floor to the surface of the table is about a meter. Oh yeah, by the way, ‘short-throw’ pretty much just means that it can project sharp images at close range. Since I didn’t want the projector to lie on the floor (back to this in a minute) - it’d have to be attached to something, so that reduces the range to about 80cm. I’m also a cheapskate (proven fact) and didn’t want to spend a ton of money on it. Finding the right projector took a while (and multiple tries), but I ended up with this beauty - the iOCHOW IO2 Mini Projector (not a particularly catchy name, I know). Basically, it ticked all the boxes that needed to be ticked: short throw, HDMI port, at least 1200 lumens.

Unfortunately for me - the iOCHOW IO2 Mini Projector doesn’t have a horizontal keystone (that thingy that lets you project at an angle without distorting the image). So I can’t directly attach it to one of the table legs, project onto the centre of the table top and live happily ever after. Well, no worries. I found a euro-pallet on the streets of Stockholm - a fairly clean one, in good condition. Borrowed a circular saw from one of my co-workers and, dare I say, friends. My first instinct was to cut out a few interconnected boards and make a sort of a wooden mesh that you can attach to the bottom of the table, which would let me put the projector pretty much anywhere under the table for greater flexibility. Turns out - the circular saw is pretty damn loud, my neighbours aren’t as friendly as I thought, and taking huge nails out of euro-pallets is an exhausting procedure. So I ended up using just one board and discarding the rest of the pallet.

You may not know this, but the wood in the pallets isn’t treated - it’s rough, coarse and irritating (and it gets everywhere… wait, what?). So I’ve also borrowed a sanding machine (from the same co-worker); it isn’t particularly quiet either - it was not a happy weekend for my neighbours. Sanded the wood. Then primed it. Then painted it black. Then attached it to the table.

As tempting as it would be to just nail the projector to the board at this point - I needed a clean, neat, repeatable way to attach it to and detach it from the table. Before I get to that - let’s talk briefly about what the projector will be projecting. Well, pictures, of course, duh. It will be connected through an HDMI cable to a RaspberryPi that will somehow be getting the images and displaying them. My RaspberryPi has a Lego-themed box, by the way. So why not use Lego to make a projector/RaspberryPi holder for the table? Cool. I purchased a pint of Lego, tried a few simple designs, found one that worked, used super-glue to stick the Lego pieces together to make sure they don’t fall apart and ta-da!

I’ve also drilled a hole in the Lego-stand so I could use a bolt to tighten it up against the board, because the stand was wider than the board by about 2-3mm and that made the whole thing wobbly.

So, the projector goes into the side-area of the Lego-stand, connects through the HDMI cable to the RaspberryPi, which is attached to the top of the stand, an external hard disk can be connected to the RaspberryPi for more space. Both the projector and the Pi are connected to a socket and the finished table looks something like this:

Software

Software was so much easier to make. But, perhaps, the main lesson of this whole endeavor is that writing software before sorting out the hardware wasn’t a great idea. Anyway.

First, I settled on a cloud storage solution: Amazon’s S3. Like, I didn’t even do much investigation - it’s free (well, free tier), there’s a lot of space, there’s an API. Done.

Next, I think of myself as a Java developer. I like Java, so I’ve decided to write the stuff I needed in Java. You might go: hold on, Aleks, writing a Java program for RaspberryPi when you can solve the problem perfectly through the magic of Python? Yeah, I know.

So here’s the deal. Tracing photos is fine. I can take a photo, upload it to S3 manually and then I’ll just need a way to get it from there and to the projector. But what if I want to learn how to draw better? What if I wanted to understand movement, physiology? What if I wanted to trace a scene in a movie, maybe modify it in the process (that might be rotoscoping)? Now that would be interesting!

I’ll need a program, let’s call it something cheesy - I Was Framed (because I’ll be working with frames, get it?). I can have ‘I Was Framed - In’ to get images into the RaspberryPi and ‘I Was Framed - Out’ for uploading frames to S3.

I Was Framed - Out

So what do I actually want to do? I want to give a video to a program, define the frames I want, click a button and have it upload the frames to the cloud. Easy.

There are four classes in ‘I Was Framed - Out’: AWSUploader, IWasFramed, Utils and VideoProcessor. You can find the code here.

Utils is basically there to hold constants and do boring stuff - like turn a time String into a long of milliseconds or do cleanup.
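A conversion like that boils down to something like this - a sketch where the method name is mine, assuming the HH:MM:SS:mmm format used further down:

        public static long timeToMillis(String time) {
            String[] parts = time.split(":");          // e.g. "00:01:30:000"
            long hours   = Long.parseLong(parts[0]);
            long minutes = Long.parseLong(parts[1]);
            long seconds = Long.parseLong(parts[2]);
            long millis  = Long.parseLong(parts[3]);
            return ((hours * 60 + minutes) * 60 + seconds) * 1000 + millis;
        }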

IWasFramed is the main class that also contains all the UI stuff. In theory I could’ve moved the UI stuff into its own class, but I didn’t, so here you have it. I’m using JavaFX because at the time of writing the code I was excited to try something other than awt/swt/swing (oh god, swing).

It’s fairly simple - there’s a stage containing a file picker that only shows specific file extensions; there’s a bunch of fields that let the user pick a start and end time in the clip; the user can also pick the length of a step between frames. You might want to store the frames locally as well, so there’s a tickbox for that. There’s also a progress bar, because the whole process might take a while and it’s nice to see that something is happening. And finally, there are three buttons: Rupload (Run and Upload), Stop and Exit.
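In case you haven’t used JavaFX - restricting a file picker to specific extensions takes all of one call (the extension list here is just an example, not necessarily the one I use):

        FileChooser chooser = new FileChooser();
        chooser.getExtensionFilters().add(
                new FileChooser.ExtensionFilter("Video files", "*.mp4", "*.avi", "*.mkv"));
        File video = chooser.showOpenDialog(stage);    // stage is the JavaFX Stage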

So, for example, you might want to get a frame every third millisecond between 00:01:30:000 and 00:01:40:000 of the video and delete all the locally created frames after you’re done. You’ll do something like this:

Now VideoProcessor is where the cool stuff is at. I’m using the Xuggle library, which makes working with video in Java so easy - it’s crazy! First - we get all the values from our fields in the UI and determine the length of the video by creating an IContainer, passing the filepath in and, literally, calling the getDuration method. Which, of course, returns the length of the video in microseconds, so we divide the result by a thousand to get milliseconds.

        // Xuggle reports the duration in microseconds - divide by 1000 for milliseconds
        IContainer container = IContainer.make();
        container.open(filename, IContainer.Type.READ, null);

        return container.getDuration() / 1000;

VideoProcessor is a Runnable: it creates an IMediaReader with the passed file and goes into a loop until it’s either done or the thread is interrupted.

        do {
            // bail out if the thread was interrupted (e.g. via the Stop button)
            if (Thread.interrupted()) {
                logger.warning("thread was interrupted");
                return;
            }
        } while (!done && reader.readPacket() == null);   // readPacket() returns null while there's more to decode

While it’s looping - it goes into the onVideoPicture method. It uses all the values we got before: length of the video, start time, end time, step length. It checks how much time has passed since the last saved frame; if that matches the step length - it stores the current frame, otherwise it moves on to the next one. When the current time is greater than the end time provided by the user (or the total length of the video) - it terminates. Oh, and it also updates the progress bar as it goes.
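Stripped of the UI plumbing, it looks roughly like this - a sketch with made-up names (FrameGrabber, stepMicros, endMicros), not the exact code:

        import com.xuggle.mediatool.MediaListenerAdapter;
        import com.xuggle.mediatool.event.IVideoPictureEvent;
        import com.xuggle.xuggler.Global;

        public class FrameGrabber extends MediaListenerAdapter {
            private final long stepMicros;              // step length between saved frames
            private final long endMicros;               // user-provided end time (or video length)
            private long lastWrite = Global.NO_PTS;     // timestamp of the last saved frame
            private int count = 0;
            public boolean done = false;

            public FrameGrabber(long stepMillis, long endMillis) {
                this.stepMicros = stepMillis * 1000;    // Xuggle works in microseconds
                this.endMicros = endMillis * 1000;
            }

            @Override
            public void onVideoPicture(IVideoPictureEvent event) {
                long now = event.getTimeStamp();        // microseconds since the start
                if (lastWrite == Global.NO_PTS) lastWrite = now;
                if (now - lastWrite >= stepMicros) {    // a full step has passed - save the frame
                    saveFrame(event.getImage());
                    lastWrite += stepMicros;
                }
                if (now >= endMicros) done = true;      // past the end time - signal the loop to stop
            }

            private void saveFrame(java.awt.image.BufferedImage frame) {
                try {
                    javax.imageio.ImageIO.write(frame, "png", new java.io.File("frame" + count++ + ".png"));
                } catch (java.io.IOException e) {
                    e.printStackTrace();
                }
            }
        }

(For event.getImage() to return a BufferedImage, the IMediaReader needs reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR) called on it first.)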

Once we’ve hit our termination condition in the loop above - we call the uploadToAWS method. This method instantiates an AWSUploader (more on it later) and passes it, as well as the directory containing all the frames, to the uploadThatStuff method (ingenious naming, I know). The uploadThatStuff method goes through the files in the provided directory and calls AWSUploader.uploadImage on each of them (and also updates the progress bar). After the upload is done, the deleteLocal method is called to clean up if the relevant box was ticked, and that’s pretty much it.
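Minus the progress bar, uploadThatStuff is little more than this stripped-down sketch:

        // The essence of uploadThatStuff - a sketch, not the exact code.
        void uploadThatStuff(AWSUploader uploader, File framesDir) {
            File[] frames = framesDir.listFiles();
            if (frames == null) return;                // not a directory (or it vanished)
            for (File frame : frames) {
                uploader.uploadImage(frame);           // one S3 upload per frame
                // ...progress bar update goes here...
            }
        }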

On to the AWSUploader. That’s where the uploadToAWS method takes us. It’s a really short class. I’m using Amazon’s AWS SDK for Java, if there was any ambiguity. Through the AWS console I’ve created an IAM user with the rights to write to an S3 bucket. AWS generates a bunch of keys that you can then store locally in a super-special place (interestingly, if you screw up and accidentally push them into your git repository - Amazon disables them and emails you saying that you dun goofed). Which makes uploads ridiculously easy:

        String keyName = dirName + "/" + image.getName();   // e.g. "myVideo/frame01.png"

        AWSCredentials credentials = new ProfileCredentialsProvider().getCredentials();  // obtained from the super-special secret file
        AmazonS3 s3client = new AmazonS3Client(credentials);

        PutObjectResult result = null;

        try {
            // one PUT per image - the key's prefix becomes the 'directory' in the bucket
            result = s3client.putObject(new PutObjectRequest(Utils.BUCKET_NAME, keyName, image));
        } catch (Exception e) {
            e.printStackTrace();
        }

So I’m storing the image as directoryName/imageName.png and S3 figures out that there should be a directory called directoryName in the bucket and stores the file in it (it’s cool like that).

And that’s that.

I Was Framed - In

Okay, and what about the part where I actually get the images from S3 and onto the RaspberryPi?

Initially I took a similar approach and created a JavaFX application, which totally worked. Then it turned out that JavaFX didn’t want to be friends with my RaspberryPi - more specifically, Oracle removed JavaFX support for ARM. So, being very grumpy and equally vocal about it - I went back and re-wrote ‘I Was Framed - In’ using swing and awt, ugh.

The code is here. It looks like there’s a lot of it, but it’s the simplest thing ever.

IWasFramedIn is the main class. It calls AWSHandler’s fetch method and ScreenController’s prepareFrames method. What do they do though?

AWSHandler.fetch uses the AmazonS3ClientBuilder to get a connection to S3 in a way similar to how the Out-version does it. Then it calls getBucketContents, which uses the connection to obtain an ObjectListing of the contents of the bucket, which then gets turned into a list of the object keys as strings.

        // list everything in the bucket and collect the keys
        ArrayList<String> dirs = new ArrayList<>();
        ObjectListing objects = s3Client.listObjects(Utils.BUCKET);
        List<S3ObjectSummary> objectSummaries = objects.getObjectSummaries();
        objectSummaries.forEach(summary -> dirs.add(summary.getKey()));

After we get the bucket contents, we pass those to the saveFiles method. The first thing it does is check whether any of those files are already stored locally and if they are - removes those specific files from the array. Then we loop through the stuff that needs to be downloaded and download it (the naming could’ve been better).

        for (String dir : nonExistentDirs) {
            S3Object s3Object = s3Client.getObject(Utils.BUCKET, dir);
            File outputFile = new File(Utils.HOME_DIRECTORY + dir);

            // keys ending in "/" are directory placeholders - just create the directory
            if (dir.indexOf("/") == dir.length() - 1) outputFile.mkdir();
            else {
                // make sure the parent directory exists before writing the file
                File parentDir = new File(Utils.HOME_DIRECTORY + dir.substring(0, dir.lastIndexOf("/")));
                if (!parentDir.exists()) parentDir.mkdirs();

                outputFile.createNewFile();

                // stream the S3 object to disk in 1KB chunks
                try (InputStream objectData = s3Object.getObjectContent();
                     FileOutputStream outputStream = new FileOutputStream(outputFile)) {
                    byte[] bytes = new byte[1024];
                    int read;

                    while ((read = objectData.read(bytes)) != -1) outputStream.write(bytes, 0, read);
                }
            }
        }

There’s one more method in this class, which deletes a file/directory from the S3 bucket; it’s called from outside of AWSHandler when we - the user - decide that we’ve done all we wanted with the stuff we were working on.
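If you’re wondering what that looks like - deleting a ‘directory’ from S3 just means listing every key under the prefix and deleting them one by one. Something along these lines (a sketch - dirName is a made-up variable):

        ObjectListing listing = s3Client.listObjects(Utils.BUCKET, dirName + "/");
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            s3Client.deleteObject(Utils.BUCKET, summary.getKey());
        }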

ScreenController is there to do two things. prepareFrames creates the three types of frames we have (here: Java swing JFrames, not video frames):

  • DirListFrame - lists all the directories with video frames that we have in S3 and uses a simple DirListCellRenderer to make the entries more readable (which I think I’ve appropriated from somewhere else and modified slightly);
  • ImagesFrame - displays the first image in the chosen directory and allows cycling through the images by button mashing;
  • WhiteFrame - for those days when we don’t want to trace existing video frames and just want a well-lit background (see the sketch below).
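WhiteFrame is the simplest of the three - roughly all there is to it is this (a sketch, minus the listeners that switch back to the other frames):

        public class WhiteFrame extends javax.swing.JFrame {
            public WhiteFrame() {
                setUndecorated(true);                                  // no title bar on the projection
                setExtendedState(javax.swing.JFrame.MAXIMIZED_BOTH);   // fill the whole screen
                getContentPane().setBackground(java.awt.Color.WHITE);  // the 'backlight'
            }
        }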

ScreenController also has setToWhite, setToDir and setToImages methods that switch the current frame to a different one; all three are called from the different frame classes.

Honestly, the logic inside the frame classes is neither complicated nor interesting (neither is the logic in this program’s Utils class), so I’ll skip the explanation; if you’re really curious - here’s the link to the repo again.

And finally, it’s all compiled into a runnable jar and set to autorun when my RaspberryPi boots.
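There’s more than one way to autostart a GUI app on Raspbian, and I won’t swear this is exactly how mine is set up, but adding a line to the LXDE autostart file does the trick (the jar path is made up):

        # /etc/xdg/lxsession/LXDE-pi/autostart
        @java -jar /home/pi/iwasframed-in.jar

Et voilà!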