I now have 3D head tracking working in Python-Ogre. It is very cool, albeit a little jittery, I think due to the graininess of the webcam images. I ended up finding an implementation of Alter’s algorithm for 3D pose estimation along with POSIT. They were in Pascal of all languages. I didn’t think people still used that one; I certainly haven’t in maybe 15 years. Alter’s was easier to use and seems to give pretty good results. I get about 30 fps with Python doing the image processing (identifying where the IR LEDs are in the image) and the 3D pose determination.
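The post doesn’t include the image-processing code, but the LED-finding step it describes boils down to thresholding a grayscale frame and averaging each bright blob’s pixel coordinates. Here’s a minimal sketch of that idea in plain Python; the function name, threshold value, and frame representation are my own assumptions, not the actual pipeline.

```python
# Sketch of the LED-finding step: threshold a grayscale frame and
# return the centroid of each connected bright blob. Illustrative
# only -- names and the threshold are assumptions, not the real code.

def find_led_centroids(frame, threshold=200):
    """Return (x, y) centroids of bright blobs in a grayscale frame.

    `frame` is a list of rows of 0-255 intensity values. Connected
    bright pixels (4-connectivity) are grouped with a simple flood fill.
    """
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # Flood-fill this blob, accumulating its coordinates.
                stack, xs, ys = [(x, y)], [], []
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    xs.append(cx)
                    ys.append(cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

Each centroid is a sub-pixel (x, y) position, which is what a pose-estimation routine like Alter’s or POSIT would take as its 2D input points.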
The effect is really nice. I can even spread it out over two of my 3 monitors. The third is unfortunately run by a separate graphics card, so Ogre won’t render to both at the same time. Maybe some hacking would fix it, but there’s always the video projector in the other room if I want to get crazy.
I’m debating using the second camera to track a stick of some sort. I have a very amusing magic wand from my LARP days that is just begging to become an input device. That would be cool because I could add physics into the demo and knock things around with the wand.
Another cool use I found for this setup is never having to lose my mouse again. I have three screens on my main system, two on my old computer, and one on my laptop, all connected by Synergy. So, I find myself losing my mouse pointer fairly often. While playing with some open source head tracking software, I had it controlling the mouse and found that the pointer was always on the monitor I was looking at. Speaking of that tracking software: even though my tracking is waaay more jittery, it felt a lot more realistic than the other one.
I should really get back to MV3D though, but let me tell you, the ticket for adding mouse-look is getting upped in priority.
I finished up integrating Axiom into MV3D and had most of it actually working. Fitting it in was pretty hard, and in the end, it turns out it won’t work for the main simulation part of things. It’d be fine for accounts, realms, directories, and asset descriptions, all things that are fairly static. In any case, I’m taking a break to decide how to proceed.
Instead, I’ve been working on something altogether different. There have been some videos going around on how to use the wiimote for various things like head tracking, a virtual whiteboard, etc. I’d been sort of thinking of getting a wiimote and writing some software to do the tracking, and then saw this. It is in Python, and it was easy for me to get working with my webcam and stuff. But looking at the code, it needed some help, and only single point tracking was working at a reasonable (>4fps) framerate. No problem. I started over on the tracking code and have multipoint 2D tracking working at about 50fps (faster than my webcam can dish out frames). I added dual webcams (crappy, but who cares for this use) from RadioShack for $15, along with an assortment of IR LEDs and a valiant floppy disk that gave its life to become an IR filter. So far, I’ve only gotten two point tracking (like the wiimote’s sensor bar), but 3 point is just a little bit of trig away.
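For the curious, the “little bit of trig” for two-point, sensor-bar-style tracking looks roughly like this: assuming a pinhole camera with a known focal length (in pixels) and a known physical spacing between the two LEDs, the apparent pixel separation gives depth by similar triangles, and the midpoint’s offset from the image center gives a bearing. This is a hedged sketch under those assumptions; the function and parameter names are mine, not the post’s code.

```python
import math

# Two-point pose sketch: known LED spacing + pinhole camera model.
# focal_px and led_spacing_m are assumed calibration values, not
# anything from the original post.

def two_point_pose(p1, p2, led_spacing_m, focal_px, cx, cy):
    """Estimate (distance_m, yaw_rad, pitch_rad) from two image points.

    p1, p2: (x, y) pixel positions of the two LEDs.
    led_spacing_m: real-world distance between the LEDs in meters.
    focal_px: focal length expressed in pixels.
    cx, cy: image center (principal point) in pixels.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    pixel_sep = math.hypot(dx, dy)
    # Similar triangles: real_sep / distance == pixel_sep / focal.
    distance = focal_px * led_spacing_m / pixel_sep
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    yaw = math.atan2(mx - cx, focal_px)    # horizontal bearing to midpoint
    pitch = math.atan2(cy - my, focal_px)  # vertical bearing (image y grows down)
    return distance, yaw, pitch
```

For example, two points 40 pixels apart with a 0.2 m real spacing and a 640-pixel focal length work out to 640 × 0.2 / 40 = 3.2 m away. A third, off-axis point is what lets you also recover roll and disambiguate head tilt.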
Anyway, I have no idea what I’d ever use this for, but it’s fun to play around with– especially hooking it up to Ogre for 3D tracking.
MV3D is getting a little big. It’s about 65kloc now in 230 files, though I suppose some of that is cruft that I’ll be pruning sooner or later. There are over 300 unit tests, but for good coverage it probably needs another 300.
I haven’t updated in a while, mostly because there hasn’t been anything interesting to update about. I’m still working with Axiom. It’s not going badly, but there’s a lot of work to do. I’m also doing a lot of refactoring and adding unit tests as I go along, so that’s part of the reason it is taking so long. I’ve finished all the server plugins except the simulation one. So, right now, I’m going through all the various objects and areas and making them work with Axiom.
After 3 or 4 failed attempts at integrating Axiom into MV3D, I’ve finally got it working in a pretty good way, I think. The only real issue I have currently is that my Items get pretty messy. As far as I can tell, if a parent class defines a member variable, the child class (which is also a subclass of Item) has to redefine that member variable as an axiom.attribute. This is a little annoying, because you’d expect to be able to define those member variables as axiom.attributes in the parent classes, but Axiom doesn’t like that. I found a way to hack around it if you aren’t planning on persisting the variable, but it’s so messy that I prefer just listing out all the parent classes’ member variables in each child class. I’ve got the Account Service and Asset Service transformed to use Axiom (including all the types of Assets and AssetGroup). Each class has about 30 extra lines defining parent member variables, but as far as I can tell, that’s the only way to do it.
Overall, I’m happy with the results, though. Next on the list is probably Directory Service, and then Realm Service. Realm Service will be fairly interesting because that’s where I’ll start to see physics stuff that needs to be initialized on load. The most fun will be with Simulation Service because not only is there physics stuff there, but the physics stuff can’t happen until the required Realm is loaded and its physics initialized.
I’m trying to decide if it’s worth partitioning the items in the simulation service. That way, you could have multiple stores and maybe use a binary tree based on the item ID to determine which store an item goes in. Actually, the directory service could really use this since it holds a listing of ALL realms / asset groups. Or maybe the partitioning should be on which directory service you get your info from. I’ll have to look into that.
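The binary-tree-on-item-ID idea can be sketched very simply: routing an ID to one of N stores by its low bits is equivalent to a balanced binary tree of depth log2(N) over the ID space. This is just an illustration of the partitioning scheme being considered; the names are invented and none of this is MV3D code.

```python
# Sketch of ID-based store partitioning: each bit of the ID is a
# left/right branch, so masking the low bits of the ID is the same
# as walking a fixed-depth binary tree keyed on those bits.

def store_for_item(item_id, num_stores):
    """Pick a store index (0..num_stores-1) for an item ID."""
    assert num_stores > 0 and num_stores & (num_stores - 1) == 0, \
        "a power-of-two store count keeps the tree balanced"
    return item_id & (num_stores - 1)
```

One nice property of this scheme is that routing is stateless: any service can compute which store owns an item without a lookup, and splitting a store just means adding one more bit of tree depth.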