Archive for the 'graphics' Category

Canvas Experiment: VRorschach

Tuesday, April 17th, 2012

VRorschach is a silly HTML5 Canvas experiment I did a year ago. You can try it by clicking the image.
It generates procedural symmetrical shapes and lets you name them and share them.

The shape-generation algorithm is quite random: it creates closed splines from random x,y values scattered around an area, and keeps shrinking the area with every polygon. I use a seeded random function so people can reproduce the same results from the same seed.
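The post doesn't include the code, but the idea can be sketched like this. The PRNG here is mulberry32, a stand-in for whatever seeded function the experiment actually used, and the shrink factor and function names are illustrative:

```javascript
// Minimal seeded PRNG (mulberry32): the same seed always yields
// the same sequence, so shapes are reproducible and shareable.
function seededRandom(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Generate random control points inside a shrinking area and mirror
// each one around the vertical axis for the Rorschach-like symmetry.
function generateShape(seed, polygons, pointsPerPolygon, size) {
  const rand = seededRandom(seed);
  const shape = [];
  let area = size;
  for (let p = 0; p < polygons; p++) {
    const poly = [];
    for (let i = 0; i < pointsPerPolygon; i++) {
      const x = rand() * area;        // point on the right half
      const y = (rand() - 0.5) * area;
      poly.push([x, y], [-x, y]);     // mirrored copy on the left
    }
    shape.push(poly);
    area *= 0.8; // shrink the area for every polygon (factor is arbitrary)
  }
  return shape;
}
```

In the real experiment each polygon would then be drawn as a closed spline through these control points on the canvas.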

It was fun to see what people saw.

I should try to find a better algorithm; I'm pretty sure I can make better-looking shapes using something less random.

Ludum Dare 22: Alone in the city

Monday, April 16th, 2012

My girlfriend and I participated in the Ludum Dare 22 competition, where you make a game in 48 hours. After the total failure of GGJ11 we wanted to make something simpler. The theme was simple: ALONE

I wrote a post-mortem about the game on the official blog; you can read it here.


Resurrecting the Blog

Monday, April 16th, 2012

I've lost count of how many times I've tried to resurrect my blog.
I wanted to keep a proper devlog like all those amazing blogs I'm subscribed to (I should make a post about them), but when I'm very active coding I don't feel communicative enough to write about it, so here we go again.

I plan to write at least once a week to explain how some of my projects are going and to share the annoying bugs and problems I've run into, along with the solutions I used.

And as I always do, I cleaned up my WordPress theme, hoping it will at least help me find the strength to write.

I made a small WebGL widget to make the blog more interesting, and I plan to do more silly 3D widgets over time.
This one is tiny in code, and it shouldn't take many resources (it only runs while the tab is visible).
I used the lightgl library to simplify the code.
Lightgl is a tiny JavaScript library that wraps some of the WebGL functionality in a friendlier API: it helps upload textures and meshes, compile shaders, and it comes with the mathematical operations common in 3D.

Check the code used to make the example above.


.OBJ to JSON python script

Sunday, June 26th, 2011

I have developed a small Python script to convert OBJ files to JSON, well suited for WebGL rendering.
It is not very optimized, and it only supports OBJs containing a single mesh with UVs and normals.
It computes the bounding box and clamps all values to 3 decimals to avoid ugly numbers.
It supports indexed meshes and coordinate swapping for meshes exported from 3ds Max.
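The actual script is Python and lives after the jump; purely as an illustrative sketch (in JavaScript, with made-up function names), the clamping, Z-up-to-Y-up coordinate swap, and bounding-box steps amount to something like:

```javascript
// Clamp a value to 3 decimals, as the script does to avoid ugly numbers.
function clamp3(v) {
  return Math.round(v * 1000) / 1000;
}

// 3ds Max exports Z-up geometry; the usual WebGL convention is Y-up,
// so one common swap is (x, y, z) -> (x, z, -y).
function swapZupToYup(v) {
  return [clamp3(v[0]), clamp3(v[2]), clamp3(-v[1])];
}

// Bounding box of a flat [x,y,z, x,y,z, ...] vertex array.
function boundingBox(vertices) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < vertices.length; i += 3)
    for (let j = 0; j < 3; j++) {
      min[j] = Math.min(min[j], vertices[i + j]);
      max[j] = Math.max(max[j], vertices[i + j]);
    }
  return { min, max };
}
```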

The code after the jump.

WebGL: Day 1

Thursday, July 8th, 2010

I like coding in OpenGL, and I like coding in JavaScript, and now WebGL allows me to do both at the same time!

WebGL is a JavaScript binding that exposes the OpenGL ES features right from the browser. The idea is good: it allows true hardware-accelerated 3D apps embedded in web pages, and it blends perfectly with the rest of the web technologies (HTML+CSS, Ajax, etc.).

Unfortunately WebGL is not mature yet, and it is kind of hard to get started even for an experienced programmer.

The first big problem I found is that JavaScript was never meant to handle binary data, and 3D graphics needs binary data for everything (textures, meshes). WebGL can do the ugly work of converting regular non-typed arrays into low-level buffers, but it is sometimes confusing, it looks slow, and it is hard to optimize. Still, if we agree that a 3D app in a browser is not meant to have Crysis-level quality, we can keep going.
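This gap was later filled by typed arrays (`Float32Array` and friends), which became the standard way to feed binary vertex data to WebGL. A minimal sketch, with the actual GPU upload commented out since it needs a WebGL context:

```javascript
// Vertex positions built naturally as a plain JS array (one triangle).
const positions = [
  -1, -1, 0,
   1, -1, 0,
   0,  1, 0,
];

// WebGL needs a low-level binary buffer; a typed array provides one.
const data = new Float32Array(positions);

// In a real app this buffer would be uploaded to the GPU, e.g.:
// gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
console.log(data.length, data.BYTES_PER_ELEMENT); // 9 floats, 4 bytes each
```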

Working with OpenGL from scratch tends to be annoying anyway, because there are lots of steps involved in creating and transforming simple things like meshes or textures, things that are usually wrapped in classes to speed up development; and here the binary-data issue makes the wrapping even harder.

For starters, all the geometry calculations (projections, transformations, vector operations, etc.) have to be coded from scratch, because JS libraries tend to focus on HTML and web interaction, not on 3D. Some libraries for 3D math are now emerging alongside WebGL, but they do the calculations in JS, which is slow for intensive computation.

Also, JS doesn't support operator overloading, so instead of V1 + V2 you end up with sum(V1, V2), which is annoying in long formulas.
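To make the complaint concrete, here is what even a simple formula looks like with helper functions instead of operators (the function names are illustrative, not from any particular library):

```javascript
// Without operator overloading, vector math becomes nested function calls.
function add(a, b)   { return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]; }
function sub(a, b)   { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function scale(a, s) { return [a[0] * s, a[1] * s, a[2] * s]; }

// What would read as "P = A + (B - A) * t" in a language with operators:
function lerp(A, B, t) {
  return add(A, scale(sub(B, A), t));
}
```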

So that's when you realize you are touching the limits of the technology: JS was never meant for this, and WebGL won't solve these issues, since it is just a wrapper around a library.

Anyway, I decided to switch to a framework built upon WebGL, because working directly against WebGL is slow and tedious.

So I chose SpiderGL as that middleware, and it looks nice, but it is not documented at all! Gaaagh. I had to read the source code to discover most of its useful features, and I found some bugs too…

But after sorting out all the problems, it works: I have coded several shaders, done post-FX on scenes and some intensive math, and it looks nice in a browser.

Sadly I haven't figured out how to export meshes in a binary format…

Anyway, if I have time I will upload some of the examples I coded, but remember that you need a WebGL-capable browser.

Testing NVIDIA 3D Vision (Stereographic gaming)

Sunday, December 20th, 2009

I spent this weekend at my parents' house, where my brother lives. This wouldn't have anything to do with the topic of the post except that my brother is 35 and has a well-paid job; under these circumstances he likes to waste money on all kinds of gadgets, even ones he will never use, just for the pleasure of showing off in front of people.

Some time ago he decided to purchase a nice pack from NVIDIA that bundles a monitor, a 3D card and stereographic glasses. They sell it as the next experience in video games, and I wanted to know if it is really an improvement.

First of all I had to battle the driver: with the latest one it didn't work. The "NVIDIA Stereo Controller" driver was not found, so I had to download a previous driver and install just the USB driver component (I document this because maybe somebody will find it useful).

Once it worked I tested some demos that looked nice, but I wanted to see it in games, and I did, and I have somewhat contradictory impressions:

The technology itself is nothing new: the glasses are just LCD shutters synchronized by an IR flashing device connected through USB. You know how it works: the computer shows the frame for the left eye on the screen and synchronizes the glasses to block the right eye, then switches fast enough that each eye perceives a different image.

This was easy with the old CRT monitors, but TFTs can't switch images that fast, so with a normal TFT monitor the other eye can still see the image that was rendered for the first eye.

This is why the pack comes with a monitor: this Samsung monitor can run at 120Hz, so there is no problem driving the active glasses.

And about the glasses: I was hoping for some improvement, but no; all glasses that use LCD shutters have the same problems, and these ones are no exception.

First, the brightness. Each eye sees only half of the frames, and when an eye is not supposed to see the screen it is covered by a dark shutter, which means the perceived brightness is half the monitor's brightness, and you feel it: if you are used to bright monitors, this is like playing with brightness set to 50%. Annoying. Of course they could create special monitors with double the brightness, but that brings us to the next problem:

Ghosting. With LCD glasses you have to ensure that the darkened eye doesn't see anything on the screen; otherwise the user sees strange objects floating around that break the stereographic sensation.

So where is the great improvement here? Well, it is not a hardware improvement but a software one: the game does not need to be coded for 3D, because the driver can do it by itself.

That's a great step forward, and I can tell you it is not easy: the driver somehow has to understand all the steps of the rendering process, determine which parts need to be redone, readjust the camera position, and render the frame for the other eye. And it works! But not perfectly, because rendering pipelines are composed of many steps and a driver cannot understand them fully. That's why DirectX has some specific features meant to take advantage of this technology.

Games developed with 3D in mind will work perfectly; the others will probably show horrible glitches, or have inter-ocular distances that make your brain explode. That's another big point: in stereographic rendering you have to set the distance between the cameras used for each eye, and the focus-point distance; if you don't set these distances right, the sensation is annoying or you just lose the 3D effect. For instance, I've been testing Colin McRae Dirt 2: the game looks amazing in 3D and the feeling of speed is awesome, but you can't play with the inside-car camera, because the inter-ocular distance is too big.

I guess the driver determines the distance with something like ((far_plane − near_plane) / 2), and as I said, it works, but not under all circumstances. Another game I tested is Torchlight; I couldn't find any glitch (and the game is not meant to work with 3D Vision), but the one annoying thing was the cursor, which floated on the screen instead of sitting at ground level.
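The driver's real heuristic isn't public, but as a toy illustration of the two distances involved (eye separation and convergence), per-eye cameras could be placed like this; the (far − near) / 2 line is the guess from the paragraph above, not documented NVIDIA behavior, and the function is purely hypothetical:

```javascript
// Toy sketch: place one camera per eye for stereoscopic rendering.
// Both eyes aim at the same convergence point in front of the viewer,
// offset horizontally by half the eye separation each.
function stereoCameras(position, eyeSeparation, nearPlane, farPlane) {
  const convergence = (farPlane - nearPlane) / 2; // guessed focus distance
  const half = eyeSeparation / 2;
  const lookAt = [position[0], position[1], position[2] - convergence];
  return {
    left:  { position: [position[0] - half, position[1], position[2]], lookAt },
    right: { position: [position[0] + half, position[1], position[2]], lookAt },
  };
}
```

Setting eyeSeparation too large relative to the scene scale produces exactly the brain-exploding effect described above for the inside-car camera.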

So in the end, what is my impression? It is hard to say. It really looks 3D, and it really improves the gaming experience, but at the same time it feels like a natural evolution of games, and you are so used to playing in 2D that you don't really miss the depth perception. I think this technology needs to walk hand in hand with head-tracking; otherwise it is just some kind of expensive eye candy.