
Final Project Report

Objective:
To create a 3D game (with animations) using the GoogleVR platform running on Android.

GoogleVR Android SDK, OpenGL graphics, and native code:
The structure of the application is like a pyramid. The application code, which is my game, runs on top of the GoogleVR SDK. This SDK is a plugin added to the standard Android SDK provided by Google. To make all the animations work, I used native (C/C++) OpenGL functions.

[Image: diagram of the application stack]

The diagram below illustrates the inputs and outputs of the application.
The user triggers some input, such as pressing a button or moving their head (detected by the phone's motion sensors). The signals are sent to the Android application. On receipt of these events, I constructed an input event in the game application. This input caused an animation (using OpenGL), which was rendered as a VR projection using the GoogleVR SDK.

[Image: input/output flow diagram]

Major Components:

  • Android Studio, 2.2.2 or higher.
  • Version 25 of the Android SDK.
  • Gradle 23.0.1 or higher. Android Studio will allow you to upgrade if your installed version is too low.
  • A physical Android device running Android 4.4 (KitKat) or higher for Cardboard apps or a Daydream Ready phone.

Process and observations:

  • Built a simple OpenGL program that responds to touches in a mobile app.
  • Ported this app to VR.
  • https://www.youtube.com/watch?v=PrZDEv2Cl9Q
  • I could not get the GVR SDK to run a heavy OpenGL process on my phone with its limited resources (a Nexus 6P!).
  • Splitting the window caused crashes.
  • So I tried running this on a server instead.

[Image: diagram of the server-based setup]

Motivation: Not all phones can process VR and graphics-intensive programs simultaneously without lag and overheating.

Advantages: The phone barely does any work; all the load is on the server.

Possible to imagine the following applications:

1. A VR social app run entirely in the cloud. All that the clients do is connect and display!

2. VR multiplayer games running in the cloud.

Demos:

[Images: demo screenshots]

 

Animation Project 2

INTRODUCTION
For my Technical Animation class (15-464), Spring '17, I implemented two tasks as my second assignment. The first was a constraint-based cloth simulation running in a browser. The second was a physics-engine-based simulation involving collision dynamics.

TASK 1: CONSTRAINT BASED CLOTH SIMULATION

OBJECTIVE
Create a constraint-based cloth simulation and investigate its behavior.

INSTALLATION GUIDE OF THE SOURCE CODE
Keep all the source files (JS, HTML, CSS) in the same folder anywhere in your filesystem, then open index.html in a browser.
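As a rough illustration of what "constraint based" means here: cloth simulators of this kind typically treat the cloth as a grid of particles joined by distance constraints that are repeatedly relaxed toward their rest length (Jakobsen-style position correction). A minimal sketch in Java, not my actual JS code (the rest length, equal weights, and two-particle setup are illustrative assumptions):

```java
// Minimal sketch: two particles joined by one distance constraint,
// relaxed toward the rest length by moving both endpoints.
public class ClothSketch {
    static final double REST = 1.0;  // assumed rest length of the constraint

    // Move both endpoints halfway toward satisfying |p1 - p2| == REST.
    static void satisfyConstraint(double[] p1, double[] p2) {
        double dx = p2[0] - p1[0], dy = p2[1] - p1[1];
        double dist = Math.sqrt(dx * dx + dy * dy);
        double diff = (dist - REST) / dist;  // signed relative error
        p1[0] += 0.5 * diff * dx;  p1[1] += 0.5 * diff * dy;
        p2[0] -= 0.5 * diff * dx;  p2[1] -= 0.5 * diff * dy;
    }

    public static void main(String[] args) {
        double[] a = {0, 0}, b = {2, 0};  // stretched to twice the rest length
        for (int i = 0; i < 10; i++) satisfyConstraint(a, b);
        System.out.println(b[0] - a[0]);  // settles at the rest length, 1.0
    }
}
```

In a full cloth, every neighbouring pair of particles shares such a constraint, and a few relaxation passes per frame (plus Verlet integration for gravity) give the characteristic cloth behavior.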

VIDEO OF THE ANIMATION
Here it is.

 

TASK 2: RIGID BODY MASS COLLISION DYNAMICS SIMULATION USING jMONKEYENGINE. 

OBJECTIVE
Implement a simple rigid body mass collision dynamics simulator using jMonkeyEngine.

INSTALLATION GUIDE OF THE SOURCE CODE

Before installing, check the license, features, and requirements. Then choose one of these options:

  • Recommended – You want to get started with jMonkeyEngine. You receive the SDK, binaries, javadoc, and sources.
  • Optional – You want to use jMonkeyEngine in another IDE. You receive the binaries, javadoc, and sources.
  • Optional – You want to build a custom engine from the sources. You receive the sources.

* The SDK creates Ant-based projects that any Java IDE can import. We recommend that users of other IDEs also download the jMonkeyEngine SDK and choose “File→Import Project→External Project Assets” to create a codeless project for managing assets only. This way you can code in the IDE of your choice, and use the SDK to convert your models to .j3o format.

When you build a Java application project that has a main class, the IDE automatically copies all of the JAR files on the project's classpath to your project's dist/lib folder. The IDE also adds each of the JAR files to the Class-Path element in the application JAR file's manifest (MANIFEST.MF). To run the project from the command line, go to the dist folder and type the following:

java -jar "MyGame.jar"

To distribute this project, zip up the dist folder (including the lib folder) and distribute the ZIP file.

Notes:

* If two JAR files on the project classpath have the same name, only the first JAR file is copied to the lib folder.
* Only JAR files are copied to the lib folder. If the classpath contains other types of files or folders, those files (or folders) are not copied.
* If a library on the project's classpath also has a Class-Path element specified in its manifest, the content of that Class-Path element has to be on the project's runtime path.
* To set a main class in a standard Java project, right-click the project node in the Projects window and choose Properties. Then click Run and enter the class name in the Main Class field. Alternatively, you can manually type the class name in the manifest's Main-Class element.

VIDEO OF THE ANIMATION
Here it is.

TECHNIQUES USED/SUMMARY OF IMPLEMENTATION
I started with a standard com.jme3.app.SimpleApplication. To activate physics, I created a com.jme3.bullet.BulletAppState and attached it to the SimpleApplication's AppState manager.

public class HelloPhysics extends SimpleApplication {
  private BulletAppState bulletAppState;

  public void simpleInitApp() {
    bulletAppState = new BulletAppState();
    stateManager.attach(bulletAppState);
    ...
  }
  ...
}

The BulletAppState gives the game access to a PhysicsSpace. The PhysicsSpace lets you use com.jme3.bullet.control.PhysicsControls, which add physical properties to Nodes.
I used Geometries such as cannonballs and bricks. Geometries contain meshes, such as Shapes; I created the brick Geometries from Box shapes. For each Geometry with physical properties, I created a RigidBodyControl.

  private RigidBodyControl brick_phy;

The custom makeBrick(loc) method creates individual bricks at the location loc. A brick has the following properties:

  • It has a visible Geometry, brick_geo (a Box-shaped Geometry).
  • It has physical properties via brick_phy (a RigidBodyControl).
  public void makeBrick(Vector3f loc) {
    /** Create a brick geometry and attach to scene graph. */
    Geometry brick_geo = new Geometry("brick", box);
    brick_geo.setMaterial(wall_mat);
    rootNode.attachChild(brick_geo);
    /** Position the brick geometry  */
    brick_geo.setLocalTranslation(loc);
    /** Make brick physical with a mass > 0.0f. */
    brick_phy = new RigidBodyControl(2f);
    /** Add physical brick to physics space. */
    brick_geo.addControl(brick_phy);
    bulletAppState.getPhysicsSpace().add(brick_phy);
  }

I created a brick Geometry brick_geo. A Geometry describes the shape and look of an object.

    • brick_geo has a box shape.
    • brick_geo has a brick-colored material.
    • I attached brick_geo to the rootNode.
    • I positioned brick_geo at loc.
    • I created a RigidBodyControl brick_phy for brick_geo.
    • brick_phy has a mass of 2f.
    • I added brick_phy to brick_geo.
    • I registered brick_phy with the PhysicsSpace.

The remaining helper methods of the demo:

    • initMaterial() – initializes all the materials used in this demo.
    • initWall() – a double loop that generates a wall by positioning brick objects: 15 rows high with 6 bricks per row. It is important to space the physical bricks so they do not overlap.
    • initCrossHairs() – displays a plus sign used as crosshairs for aiming. Note that screen elements such as crosshairs are attached to the guiNode, not the rootNode!
    • initInputs() – sets up the click-to-shoot action.
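The brick-spacing arithmetic behind initWall() can be sketched in plain Java. The brick dimensions and gap below are assumed values, not the ones from my project; the point is that each brick's position includes a small gap so neighbouring rigid bodies never spawn interpenetrating:

```java
// Sketch of the initWall() placement loop: 15 rows, 6 bricks per row.
// BRICK_W/BRICK_H are half-extents (box shapes use half-sizes); the small
// GAP keeps adjacent rigid bodies from starting in an overlapping state.
public class WallLayout {
    static final float BRICK_W = 0.4f, BRICK_H = 0.2f, GAP = 0.01f;

    static float[][] brickPositions(int rows, int cols) {
        float[][] pos = new float[rows * cols][3];
        int k = 0;
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                pos[k][0] = col * (2 * BRICK_W + GAP);  // x: across the row
                pos[k][1] = row * (2 * BRICK_H + GAP);  // y: stacked upward
                pos[k][2] = 0f;                          // z: flat wall
                k++;
            }
        }
        return pos;  // each entry would go to makeBrick(new Vector3f(x, y, z))
    }

    public static void main(String[] args) {
        System.out.println(brickPositions(15, 6).length);  // 90 bricks
    }
}
```

A real wall usually also offsets alternate rows by half a brick, but the spacing idea is the same.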

The moment the cannonball appears in the scene, it flies off with the velocity (and in the direction) that you specified using setLinearVelocity() inside makeCannonBall(). The newly created cannonball flies off, hits the wall, and exerts a physical force that impacts the individual bricks.

The location of a dynamic Spatial is controlled by its RigidBodyControl: move the RigidBodyControl to move the Spatial. If it is a dynamic PhysicsControl, you can use setLinearVelocity() and apply forces and torques to it. Other objects with a RigidBodyControl can push the dynamic Spatial around (like pool/billiard balls).
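The velocity handed to setLinearVelocity() is just the camera's look direction normalized and scaled by a launch speed. Sketched in plain Java with float arrays instead of jME's Vector3f (the speed value is an assumption):

```java
// Sketch of the velocity computed in makeCannonBall(): normalize the
// camera's look direction, then scale it by a launch speed. Equivalent
// to jME's cam.getDirection().mult(speed).
public class LaunchVelocity {
    static float[] velocity(float[] lookDir, float speed) {
        float len = (float) Math.sqrt(lookDir[0] * lookDir[0]
                + lookDir[1] * lookDir[1] + lookDir[2] * lookDir[2]);
        return new float[] {
            lookDir[0] / len * speed,
            lookDir[1] / len * speed,
            lookDir[2] / len * speed };
    }

    public static void main(String[] args) {
        // looking straight down +z; 25 is an assumed launch speed
        float[] v = velocity(new float[] {0f, 0f, 2f}, 25f);
        System.out.println(v[2]);  // 25.0
    }
}
```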

 

Animation Project 1

OBJECTIVE
For my Technical Animation class (15-464), Spring '17, I created walk cycles on two different platforms. The objective of one was to dig deeper into keyframing, which I explored through Maya. The other task was to implement some form of character animation (keyframed as well) in an Android application.

Let me begin with the documentation of the latter task first.

TITLE
Exploring keyframing for a sample of character animation cycles (walk, dance, etc) using Sprite Animation on Android.

INSTALLATION GUIDE OF THE SOURCE CODE

  1. Install the latest Android Studio in your system from https://developer.android.com/studio/install.html
  2. Start Android Studio.
  3. From the Android Studio menu select File > New > Import Project.
  4. Select SpriteAnimation
  5. Build and run the app. Instructions for Android Studio are at https://developer.android.com/studio/run/index.html
  6. Android-powered devices have a host of developer options that you can access on the phone, which let you:
  • Enable debugging over USB.
  • Quickly capture bug reports onto the device.
  • Show CPU usage on screen.
  • Draw debugging information on screen such as layout bounds, updates on GPU views and hardware layers, and other information.
  • Plus many more options to simulate app stresses or enable debugging options.

To access these settings, open the Developer options in the system Settings. On Android 4.2 and higher, the Developer options screen is hidden by default. To make it visible, go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options at the bottom.

VIDEO OF THE ANIMATION
Here it is.

TECHNIQUES USED/SUMMARY OF IMPLEMENTATION
Sprite sheets have been around since the first days of computer games. The idea is to create one big image that contains all the animations of a character, instead of dealing with many single files. They became less important for a time as more and more 3D games showed up, but made a big comeback with mobile devices and 2D games.
An animation strip is the simplest form of a sprite sheet: It’s just placing each animation frame next to each other. All frames have the same size, and the animation is aligned in each frame.
[Image: a sprite strip]

It might also contain multiple lines if the animation is longer, or if several different animations are combined. A tileset is not much different from a sprite sheet: it contains building blocks for game levels. This is an example of a tile map:
[Image: a tile map]

It’s easy for a game to retrieve the sprites since they all have the same width and height. The disadvantage is that the sprites waste a lot of memory because of all the additional transparency.
The developers of game engines became aware of the memory wasted by simple sprite sheets and started to optimize the space.
The easiest way I found was to remove the transparency surrounding the sprite and shrink it to the bounding box. An even better solution is only to use the polygon containing the sprite.
This allowed me to create sprite sheets that are way more compact and waste less memory. They also shrink the download size for the game.
The easiest way to create optimized sprite sheets is using TexturePacker. TexturePacker is a tool that specializes in creating sprite sheets. The free version allows you to create sprite strips and tile maps.
Structure:
Spritesheets are made up of two parts: frames and cycles.
A frame is a single image (or sprite) from the spritesheet. Each picture of the cartoon character shown above in the image would be a frame. When the frames are put in an order that creates a continuous movement, it creates a cycle. Putting the photos of the character in the order that they were taken produces a “run” cycle since the character is running (as opposed to a “walk” or “idle” cycle).
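Mapping a frame index to its location in the sheet boils down to row/column arithmetic, which is what the nested loop in the code further below performs. A self-contained sketch (the constants are placeholder values, not the ones from my app):

```java
// Sketch: map a frame index to its (x, y) pixel offset in the sprite sheet.
// COUNT_X is the number of frames per row; FRAME_W/FRAME_H are the fixed
// per-frame dimensions. Concrete values here are assumptions.
public class FrameIndex {
    static final int COUNT_X = 4, FRAME_W = 64, FRAME_H = 96;

    static int[] frameOffset(int frame) {
        int col = frame % COUNT_X;  // column within the row
        int row = frame / COUNT_X;  // which row of the sheet
        return new int[] { col * FRAME_W, row * FRAME_H };
    }

    public static void main(String[] args) {
        int[] off = frameOffset(5);  // second row, second column
        System.out.println(off[0] + "," + off[1]);  // 64,96
    }
}
```

The offset pair is exactly what gets passed to Bitmap.createBitmap() when cutting a frame out of the sheet.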
Coding technique:
The algorithm to obtain the bitmap objects from a series of sprite images is shown below.

// load bitmap from assets
Bitmap btmBmp = getBitmapFromAssets(this, filename);

if (btmBmp != null) {
    // cut bitmaps bmp to array of bitmaps
    bmps = new Bitmap[NB_FRAMES];
    int currentFrame = 0;

    // label lets us exit both loops once every frame has been cut
    outer:
    for (int i = 0; i < COUNT_Y; i++) {
        for (int j = 0; j < COUNT_X; j++) {
            bmps[currentFrame] = Bitmap.createBitmap(btmBmp, FRAME_W
                    * j, FRAME_H * i, FRAME_W, FRAME_H);

            // apply scale factor
            bmps[currentFrame] = Bitmap.createScaledBitmap(
                    bmps[currentFrame], FRAME_W * SCALE_FACTOR, FRAME_H
                            * SCALE_FACTOR, true);

            if (++currentFrame >= NB_FRAMES) {
                break outer; // a plain break would only exit the inner loop
            }
        }
    }

I built an array of bitmap images after scaling them.

 for (int i = 0; i < NB_FRAMES; i++) {
        animation.addFrame(new BitmapDrawable(getResources(), bmps[i]),
                FRAME_DURATION);
    }

    img.setBackground(animation);

    // start animation on image
    img.post(new Runnable() {

        @Override
        public void run() {
            animation.start();
        }

    });

}

In the next block of code, I added each frame to the animation drawable, which renders those sprites at a defined frame rate. The animation is started from a Runnable posted to the view, as shown above.

OBSERVATIONS
There was no specific aspect of the task that was difficult. If I had more time on the project, I would add more types of character animation and more examples of sprite animations. At the end of this blog post is a comparison of this task and the Maya-based walk cycle. Stay tuned or go to the very bottom!

TITLE
Exploring setting of keyframes to create expressive animation.

VIDEO OF THE ANIMATION
Here it is.

TECHNIQUES USED/SUMMARY OF IMPLEMENTATION
The design of the walk cycle of Norman started with the analysis of the movements of the different body parts during a human walk cycle. Based on these movements, the poses of the walk cycle were designed and implemented in a suitable computer program. For this project the 3D computer animation program Autodesk Maya 2017 was used.
The Different Poses of a Walk Cycle: A walk cycle can be described by four distinct "key poses": CONTACT, RECOIL, PASSING, and HIGH-POINT. Once these poses are created, they can be repeated over and over to form the walk cycle. This type of animation is called pose-to-pose animation.
CONTACT: During normal walk movements, the first and last positions are called the "contact position". In this position the legs are furthest apart. The heel of the front leg and the toe of the back leg are touching the ground. The two contact positions are the same but inverted: when the left leg is forward, the left arm goes back, and vice versa. This is called a "counter pose" and helps the character keep its balance. The legs are in their most extreme position in the contact position.
RECOIL: The second position in the walk cycle is the recoil. This is where the character lifts the back leg and moves it up and forward. The recoil position is usually where the character's body is at its lowest. The bent leg in front takes all the weight, and the foot is placed right under the body to balance it. In this position the arms are furthest apart.
PASSING: The passing position is in the middle of the walk cycle. Here the character has one leg straight and the other slightly lifted and bent. Because the leg is straight in the passing position, it is going to lift the body and head upwards.
HIGH-POINT: As the name says, this is the highest point in the walk cycle. In this position the foot is pushing off and lifts the body and head, before the other leg is thrown out and catches us in the contact position.
Timing and Key frames: There are many different ways of animating characters and objects to bring them to life. One of the most common is key frame animation. Time corresponds to frames (e.g. 24 frames per second); key frames define distinct postures, and Autodesk Maya then calculates the in-between frames. In a normal medium-speed walking movement, a person takes about two steps per second, so each step takes about half a second.
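At 24 fps and half a second per step, one step spans 12 frames, so the four key poses land on predictable frame numbers and Maya fills in the rest. A sketch of that in-betweening in Java (linear interpolation only, whereas Maya's default animation curves are smoother splines; the key frame numbers and hip angles are made-up illustrative values):

```java
// Sketch: one walk step spans 12 frames (24 fps * 0.5 s). Place the four
// key poses on key frames and linearly interpolate a joint angle between
// neighbouring keys, the way in-between frames are generated from keys.
public class WalkTiming {
    // Key frame numbers: contact, recoil, passing, high-point (next contact at 12).
    static final int[] KEYS = {0, 3, 6, 9};
    static final double[] HIP_ANGLE = {30, 15, 0, -10};  // assumed degrees

    static double hipAt(int frame) {
        for (int k = 0; k < KEYS.length - 1; k++) {
            if (frame >= KEYS[k] && frame <= KEYS[k + 1]) {
                double t = (frame - KEYS[k]) / (double) (KEYS[k + 1] - KEYS[k]);
                return HIP_ANGLE[k] + t * (HIP_ANGLE[k + 1] - HIP_ANGLE[k]);
            }
        }
        return HIP_ANGLE[HIP_ANGLE.length - 1];
    }

    public static void main(String[] args) {
        System.out.println(hipAt(0));  // 30.0, the contact pose itself
        System.out.println(hipAt(4));  // an in-between, recoil-to-passing
    }
}
```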
Gravity: To be able to create a believable walk cycle it is important to add the impression of weight to it and to know where the weight should be placed. During a walk cycle the moment when a foot hits the ground is the most important moment where gravity should be noticeable. At this very moment, also the rest of the body should, to some extent, follow this downward movement that is stopped when the foot-ground contact is made. During this phase, typically the foot is moved downwards to the ground with a certain vertical speed, and when it hits the ground this vertical movement is rather abruptly stopped. Taking into account this aspect can increase the credibility of the character being immersed in its surroundings and avoids the impression that the character is floating or sliding.
I found the rig of Norman here.
I used the tutorials provided by Udemy and Lynda to implement the walkcycle.

OBSERVATIONS
I found this task very challenging indeed. I had not used Maya before, and it took me a long time to get the hang of it. If I had more time on the project, I would add more types of walk cycles and aim for more expressiveness. Next up is a comparison of this task and the sprite-based walk cycle.

COMPARISON OF THE TWO TASKS
An important point of difference is that in the Maya-based keyframing, we create and customize the keyframes ourselves. The Android app just plays back existing keyframes (sprites) at a specific rate. With Maya, I can edit as many keyframes as I want; with the application I made on Android, the features are limited. Basically, comparing the two tasks is apples and oranges. They are not really related except that both implement animation using keyframes: one in Maya, and the other via a tool that renders images played at a fast rate. In essence, the sprite animation is more traditional.

ROM naam satya hai

System.out.println("Hello Internet");


ROM stands for Read Only Memory.


Haha. But the story isn’t about your Read Only Memory chips. No, it’s not. It’s about Android OSes.


Allow me to explain. Suppose you buy a phone (or a tablet) with Android. The Android operating system that comes with the phone (just like how you get Windows 8 or something when you buy a laptop) is called a stock ROM. The stock ROM is a version of Android by Google (just like how Windows 8 is a version of Windows by Microsoft) coupled with some extra usual mumbo-jumbo by the device (phone) manufacturer.


Ah, then your friend must probably be using a Google phone (Nexus).

The Nexus phones (although manufactured by other vendors) have pure Android running in them without any of these extra “un-uninstallable” (sorry!) apps that consume the phone’s memory and battery charge. Of course, if yours isn’t a Google phone and you don’t appreciate these redundant apps, you can remove them if you root your phone. But more on that some other day.


Whaaaaaaat? I wasn’t gonna talk about rooting, but now you made me. By rooting, you can access the “touch-me-not” areas of the phone software (in this case, Android) that were restricted to you before. It’s done to overcome a lot of existing limitations of your phone. Android is derived from Linux (another operating system, commonly run on laptops and servers), which provides such administrative privileges.


Moving on :D.

Without getting into the technical details, let me brief you a bit about the benefits of rooting. Maybe I can dive into it further some other day, if my blog survives or if we survive the apocalypse; whichever comes last!

Haha, that would be a miracle, wouldn’t it? My blog surviving the apocalypse. Especially if it’s similar to this one.


Sorry again mate but you definitely have to check that apocalyptic video.

So where were we? Yes, root. By rooting you can install certain “special” apps allowing you to do special things with your mobile, like use more of its system resources efficiently. You can also do unethical stuff if you desire. Ahem. You can speed it up and conserve battery. What next? Oh yes, you can automate stuff like setting alarms and toggling your cellular internet and GPS, among others. Aha, you can block ads and stream smoothly. As mentioned earlier, you can uninstall the mumbo-jumbo apps that came along with your stock ROM if you wish. And I haven’t even started on the best bit about rooting….


FLASHING CUSTOM ROMs.. ta tada…!!

You can replace the Android in your phone with another one made by a custom ROM developer. “Installing” the other Android onto your phone is called flashing.


For your information, there are people out there tweaking the stock Android and creating their own versions. Google allows you to do that under open source licences. Some licences. Don’t bother about it. And why do people modify the stock ROMs? To make the phone better, of course! That’s the whole point! How is it different from rooting? Rooting is…


Fine let me bullet-point out.

* Rooting is providing the user extra access to the phone. It is done by flashing another slightly tweaked ROM onto your phone. It’s not a custom ROM because it is still pretty much the stock ROM, but with extra permissions.
* Flashing a custom ROM means installing a version of Android that not only provides special access to your phone but is also very heavily modified. Hence the word “mod” in the ROM world to refer to a custom ROM.
* You don’t need root access to flash a custom ROM; though you’d need to unlock your bootloader. More on this some other post.

With this, I think it’s fair to conclude right now. I know I have left a lot of areas untouched; I could’ve dived deeper, but I’m getting late for meeting up with a friend.

If you were new to this, I hope this post helped and I suppose you’d be like….


Various ways you can animate!

One of the most interesting guides to kick-start the art of animation techniques that I found is “The Animator’s Survival Kit” by Richard Williams. It introduces and demonstrates a technique called keyframing, based on the notion that an object has a beginning state and will change over time, in position, form, color, luminosity, or any other property, into some different final form. We then let the artist or a computer program fill in the intermediate forms. As an example, check out Mike Brown’s simple method for creating walk cycles here. My views on this: (1) Very impressive, considering all this was produced with just four keyframes. (2) Great attention to detail, especially with respect to the shoulder, hip, and feet movement. You should also check out the stop-motion method of keyframing by the makers of Kubo and the Two Strings here. I was amazed by the creativity of the artists. I am still not sure whether this is more or less hard work than using a computer program, but this technique does give a “natural” feel to lighting, shadows, and materials. However, I assume the characters’ facial expressions must be difficult to work on.

Procedural animation is a technique used to allow for a more diverse series of actions than could otherwise be created. Here, predefined animations are used following a set of “rules”. This could be used in crowd based animation, as done by the Game of Thrones folks. Check how they did it using Massive Software here. My views on this: (1) As I was not even aware of this technique, I was awestruck with this idea; the fact that a set of rules can result in terrific simulations. (2) I wonder if deep learning techniques are applied to the rules of procedural animation; I guess they are.

Physics based animation is simply how physics is simulated for human visual consumption. More on this in later posts.

Finally, we have data-driven animation. Data-driven animation using motion capture data has become standard practice in character animation. Dense markers are placed on various points on the object (a human, for character animation), which help in “drawing out” parts of the body/skeleton/facial features, as was done in Avatar. My views on this: (1) I was aware of this form, especially from game development videos, a prime example being FIFA. I feel the technology is far from perfect here. This is acceptable for game graphics, maybe, where attention to detail in the animation need not be the primary issue; for movies, I feel there is room for improvement. (2) I wonder if this is limited to only human or human-like character animation.

All these and more emerging ways of animation (like autonomous animation) are combined in some sort to produce the jaw dropping realistic movie and gaming experience that we love.