MV3D Development Blog

January 27, 2008

More work for me!

Filed under: Uncategorized — SirGolan @ 5:03 pm

I keep making these tickets to do what I think will be fairly straightforward things, and then once I start getting into them, I realize that it’d be better to do a whole lot of refactoring and cleanup. The little ticket suddenly becomes a multi-week or multi-month project. I’m stuck on one of those now, actually.

I wanted to just go in and decouple sub-servers from their names. Previously, you had a Server, and you could give it sub-servers that would handle things like accounts, assets, simulation, etc. There were a couple of problems with how that worked. First off, in order to run even the simplest MV3D server, you basically needed one of each. Secondly, there really wasn’t any way to use them as plugins; you couldn’t swap in an alternate implementation of an account server. They also all relied heavily on the main server object. So, I thought I’d see about making them pluggable and less reliant on the main server. Sounds good, right?

As I got into it, I noticed that the implementation was pretty horrible and would need major changes (it was probably the oldest code in MV3D). One thing I’ve been thinking about is reducing MV3D’s reliance on PB and allowing for the possibility of it speaking other protocols. The main push behind this was the authentication scheme I mentioned in the last post. I was also seeing that the Client class had 90% of the same code as the Server class. The Client has notions of sub-clients, some of which map directly to sub-servers.

From all this came the Conductor. His job is to manage a set of services and keep them in sync. I still didn’t think this would be a huge thing to do; just some search and replace and stuff. The problem came when I wanted to change the hard-coded sub-server names into something you could specify in a config file. See, things like the asset sub-server always used the name “Asset Server.” I decided this was pretty silly and that you should be able to name them in case you wanted to have a couple of types of asset services around. The main problem there is that when one service wants to talk to another, it used to be able to just look up a specific named service attached to the main server. Can’t do that any more; it all has to be in the config file. Not so bad. Then I started looking at how things get remote connections. The main server class handled that through a connection manager object. The purpose of the connection manager was to make sure that you didn’t open a new connection every time you needed something; in many cases, you’ll already have an open PB connection and can just make use of it.

Only it was again some of the original MV3D code and was fairly badly implemented. I was also thinking about how multiple open network ports (speaking different protocols) would work. They wouldn’t, basically. So the connection manager had to go (it was dumb anyway). Everywhere in MV3D that tries to open a new PB connection is just looking to talk to a specific sub-server (now called a service). It would indeed be cool to have some sort of service locator that pointed to a specific service on a specific box and included information about the protocol to use.

So now I’ve rearranged everything: what was the connection manager split in two, a client and a server. The server side is now just another service you can add to the conductor. The client side follows an interface that can be used for other protocols as well as PB (the connection manager only understood PB). So now you just tell the conductor to get you a service and give it the location, like conductor.getService("pb://mv3d.com/sim"). It’s pretty cool, and maybe 1000-2000 fewer lines. You can even specify local services as "self/auth".
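The locator idea can be sketched in a few lines. To be clear, this is just an illustration of the concept, not MV3D’s actual Conductor: the method names, the connector registry, and the connector callable are all made up for the example.

```python
from urllib.parse import urlparse


class Conductor:
    """Sketch of a service locator: local services are registered by
    name, and remote lookups are dispatched on the URL scheme."""

    def __init__(self):
        self._local = {}       # name -> local service object
        self._connectors = {}  # scheme (e.g. "pb") -> callable(host, name)

    def addService(self, name, service):
        self._local[name] = service

    def addConnector(self, scheme, connector):
        self._connectors[scheme] = connector

    def getService(self, locator):
        # "self/auth" means a service running in this same conductor.
        if locator.startswith("self/"):
            return self._local[locator[len("self/"):]]
        # Otherwise, e.g. "pb://mv3d.com/sim": pick the connector for
        # the protocol and ask it for the named service on that host.
        url = urlparse(locator)
        connect = self._connectors[url.scheme]
        return connect(url.netloc, url.path.lstrip("/"))
```

A PB connector here would be a callable that opens (or reuses) a Twisted PB connection to the host and returns a remote reference to the named service; other protocols just register under their own scheme.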

Unfortunately, now, I have to go through everywhere that used to open a new connection and change how it works. Nothing like making extra work for myself! However, I’m happy with this change, and I think it does a lot to make the inner workings of MV3D more flexible and easy to work with. That’s one of the big things I want to work on before releasing the code.

January 23, 2008

Respect my authorization!

Filed under: Uncategorized — SirGolan @ 12:27 am

A little while back, I noticed that the way logging in was handled on MV3D servers was rather terrible in the security department. It wasn’t completely horrible given the network design it was built for: originally, MV3D was going to run only on servers I controlled, so trusting the servers with a lot of things was more or less OK. However, now the plan is that anyone can run a server and connect it into the MV3D network. That means I suddenly can’t trust servers any more.

The initial way it worked was that you’d connect to a random server, give it your username and password and then it would either validate it locally, or fetch your account data from an Account Server (including password) and then validate. Clearly, that doesn’t even come close to cutting it any more. My next idea (and the ticket that got me started thinking about this) was to connect to a random server, give it your username and password, and have it send that to an Account Server. You may see that there’s a slight problem there, too. If you can’t trust the server you’re giving your username and password to, then what?

Try 3 is to send your username and password to an Account Server and get back an authorization token. You can then pass that token to servers you want to log in to. Those servers contact the Account Server to check the token and either boot you or let you in depending on what it says. That’s a lot better, but there are still some holes in it. Say someone is running a server and wants to break into accounts: all they’d have to do is keep copies of working auth tokens and replay them to log in to the person’s account.

Now what? I was thinking I could add some information into the auth token such as IP address, but, uh, that’s pretty easy to fake, and the evil server is going to know that info anyway. My most recent idea is to add a trusted Login Server into the mix that would proxy back to an Account Server to check passwords. Clients (which in this case could be other servers) should speak to it over SSL and require the user to validate certificates of new servers or something. That would leave it in the user’s (or server admin’s) hands to decide whether they could trust a Login Server. Possibly more ideas on this later. Just a quick note about who uses the Login Server: when one server talks to another, it needs to authenticate, and therefore should use the Login Server. Clients of course also need to use it. For simplicity, I’ll just refer to users of the Login Server as clients.

A client would contact a Login Server before contacting anything else and send over its username and a message containing some info about the server it wants to log in to encrypted with its password. The Login Server hands back an authentication token that basically consists of the username, the server info the client sent in, a timestamp, and some random data encrypted with a temporary encryption key that only the Login Server knows. The client can then disconnect from the Login Server.

The client then connects to the desired server and sends that authentication token to it (no username). The server connects to a Login Server and passes on the authentication token and the server’s info. The Login Server decrypts the token and verifies the data including making sure the date is recent enough. If everything matches up, the Login Server passes back the least amount of user info required for the server to do its thing. The temporary encryption key is then forgotten. The user is allowed access to the server.
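Here’s a rough sketch of the token round trip. Big hedge: the real scheme above encrypts the token with a temporary key only the Login Server knows, while this sketch just HMAC-signs the payload instead, which keeps the “only the Login Server can validate it” property but doesn’t hide the contents. The class and field names are all made up for illustration.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time


class LoginServer:
    """Issues auth tokens and later validates them for other servers.
    The key never leaves this object, so only it can check tokens."""

    def __init__(self, max_age=60.0):
        self._key = secrets.token_bytes(32)  # temporary key, never shared
        self.max_age = max_age               # how recent a token must be

    def issue_token(self, username, server_info):
        # Token contents: username, the server the client wants to join,
        # a timestamp, and some random data -- as described in the post.
        payload = json.dumps({
            "user": username,
            "server": server_info,
            "ts": time.time(),
            "nonce": secrets.token_hex(8),
        }).encode()
        mac = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        return base64.b64encode(payload).decode() + "." + mac

    def check_token(self, token, server_info):
        """Called by the target server. Returns the username on success,
        or None if the token is forged, stale, or for another server."""
        try:
            body, mac = token.rsplit(".", 1)
            payload = base64.b64decode(body)
        except Exception:
            return None
        good = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, good):
            return None                       # tampered or forged
        data = json.loads(payload)
        if data["server"] != server_info:
            return None                       # issued for a different server
        if time.time() - data["ts"] > self.max_age:
            return None                       # not recent enough
        return data["user"]
```

Checking the server info inside the token is what stops the evil-server replay above: a token you handed to one server is useless for logging in anywhere else, and the timestamp keeps even a matching one from living long.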

Sounds pretty convoluted. Maybe I should just use Kerberos or something. But either way, can anyone poke holes in my method? It is fairly similar to Kerberos in some ways.

January 14, 2008

Wiitard

Filed under: Uncategorized — SirGolan @ 11:55 am

.. that’s what my wife keeps calling me for obsessing over the Wiimote. One of the big things I wanted to do with the wiimote is estimate its position/rotation when it can’t see the IR LEDs. It isn’t possible to do this very accurately, but I was able to put together a program that does it. You can even use it with the nunchuk. It can’t figure out yaw at all, since gravity doesn’t act in that direction to affect the accelerometers. The most important part is calibrating the sensors. I made a six-direction calibration routine (since I’ve found that not only do I have to scale the measurements, but gravity also reads oddly differently depending on whether the wiimote is facing up or down). So, you point the remote in different directions and it calculates the calibration. It is still a bit wiggly, and the position estimate deteriorates fairly quickly, but I feel like it’s enough to get an idea. With the nunchuk, I’ll have to make it gravitate towards the center or something. There are also some more things I can do to make it more stable, such as determining and then integrating the angular velocity.

Basically, in order to know the linear acceleration, you need to know the orientation of the controller so you can remove the force of gravity from the readings. In order to know the orientation, you need to know the linear acceleration so you can remove it before calculating the direction of gravity. That’s basically the problem: you can’t know one without knowing the other.

If you make some assumptions, you can get semi-accurate data. That’s all I’m currently doing: I assume that when the reading on the accelerometers is close to 1g, you aren’t accelerating. The software then figures out the orientation of the wiimote and uses that for future linear acceleration readings. I suspect I can squeeze out some more accuracy with other similar tricks, or by using the jerk* to predict the expected next linear acceleration and assuming major differences from that are orientation changes or something like that.
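The 1g assumption can be sketched roughly like this. The axis conventions, tolerance value, and class name are all made up for illustration; the real program also has the six-direction calibration in front of this, which the sketch skips.

```python
import math

G = 1.0  # accelerometer magnitude, in g-units, when at rest


class TiltEstimator:
    """Estimate orientation (pitch/roll) from a 3-axis accelerometer,
    assuming readings near 1 g mean the controller isn't accelerating."""

    def __init__(self, tol=0.05):
        self.tol = tol
        self.gravity = (0.0, 0.0, 1.0)  # last trusted gravity direction

    def update(self, ax, ay, az):
        """Feed one calibrated reading; returns linear acceleration."""
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(mag - G) < self.tol:
            # Close to 1 g: trust this reading as the gravity direction,
            # i.e. the controller's current orientation.
            self.gravity = (ax / mag, ay / mag, az / mag)
        # Linear acceleration = measured minus remembered gravity.
        gx, gy, gz = self.gravity
        return (ax - gx, ay - gy, az - gz)

    def pitch_roll(self):
        """Pitch and roll in radians from the gravity direction.
        Yaw is unrecoverable: rotating about gravity doesn't change it."""
        gx, gy, gz = self.gravity
        pitch = math.atan2(-gx, math.sqrt(gy * gy + gz * gz))
        roll = math.atan2(gy, gz)
        return pitch, roll
```

The catch is exactly the chicken-and-egg above: while the controller is genuinely accelerating, the orientation estimate is frozen at whatever the last near-1g reading said, so a swing that also rotates the remote drifts until it settles again.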

Anyway, my test program uses pygame, which I’d never really used before. I started using it on the webcam IR tracking since the original version used it. It seems pretty useful for visualizing 2d things. But for 3d, it was Python-Ogre time.

I started a test app for actually using the wiimote in a 3d environment. Basically, there are a bunch of blocks lying around, and you have one that you control with the wiimote. You can use it to push the other blocks around or even pick them up and throw them. However, I find stacking them to be a fun challenge. There is force feedback involved, so when your virtual hand touches something, the wiimote vibrates. All in all, it’s a pretty good interface. I may want to add some more smoothing to the wiimote position, because it can be a little tricky to hold it still. By the way, the camera in the wiimote seems to be 1024×768, and it tracks the dots at about 30fps, I think. That makes things very responsive. I do wish they had put a wide-angle lens on it, though. That would have been really nice. As it is, the FOV is fairly narrow, which lessens the coolness of the good resolution.

The next test was to add head tracking into the mix (previously, the camera was rather stationary). I decided to use FreeTrack to do that since there is an odd bug in the software I wrote that causes roll to not be calculated correctly. FreeTrack pretends to be TrackIR, which GlovePIE captures and then sends via OSC to my test app.

My previous head tracking app assumed you had VR goggles. I don’t actually have those, so it’s fairly useless for it to work that way. One thing that is cool about Johnny Lee’s implementation is that it makes it look like your monitor is a window. The Ogre guys have been trying to figure out exactly how to do this, and I tried their code in my old head tracking app. It didn’t work very well. However, by messing with it a bit, I was able to get it to work fairly well for me. It’s still not quite as convincing as Johnny Lee’s version, but pretty close.
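For reference, the usual way to get the “monitor as a window” effect is an off-axis (asymmetric) view frustum computed from the tracked head position. This is a sketch of that general technique, not the Ogre forum code I actually used, and it assumes the head position is measured relative to the screen center in the same units as the screen size.

```python
def window_frustum(head, screen_w, screen_h, near):
    """Asymmetric frustum extents (left, right, bottom, top) at the
    near plane, for a camera placed at the viewer's head.

    head: (x, y, z) relative to the screen center, x right, y up,
    z out of the screen toward the viewer (z > 0)."""
    hx, hy, hz = head
    scale = near / hz  # project the screen edges onto the near plane
    left = (-screen_w / 2 - hx) * scale
    right = (screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top = (screen_h / 2 - hy) * scale
    return left, right, bottom, top
```

With the head centered the frustum is symmetric and looks like an ordinary perspective projection; as the head moves off to one side, the frustum skews so the screen edges stay pinned to the same world-space rectangle, which is what sells the window illusion. The extents would then go into something like Ogre’s Frustum::setFrustumExtents each frame.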

So now, I have a little app where you can use the wiimote like a virtual hand, move around via the joystick on the nunchuk, and finally using the head tracking on a stationary monitor. Pretty cool.

My main reason for doing all this is to come up with a good interface for MV3D’s in game editor that uses these techniques. Granted, there will have to be a fallback for pretty much everyone else since they won’t have a similar setup. I’m still trying to figure out what I can use the movement/orientation of the nunchuk for.

I’m vaguely thinking that two more wiimotes would be good. One to take the place of the IR camera since the wiimote’s IR camera is so much better. The other would be for a second hand. The only problem is that it’s nice to have the analog joystick from the nunchuk.

* jerk: Not just some annoying person any more.

January 8, 2008

Wiiiiiiiimote

Filed under: Uncategorized — SirGolan @ 9:44 pm

My bluetooth adapter came in today, so I’ve got lots to say now that I’ve actually used the wiimote. My original plan to go from wiimote to Python was to use GlovePIE to redirect the controls to PPJoy (a joystick emulator), and then just read the virtual joystick using Ogre’s input system or pygame.

Getting the wiimote to talk to my computer was a piece of cake. Installed the bluetooth adapter, paired it with the wiimote, opened GlovePIE and made a simple script that turned on rumble whenever I hit the B button. No problems there. I also got it to work as a mouse pointer by turning on two of the 3 LEDs in my IR beacon. The next challenge was getting data into Python. There are a few libs that “support” the wiimote in Python, but as far as I can tell, none of them support nearly as many features as GlovePIE. So, I installed PPJoy but there was no joy to be had there. It doesn’t support XP x64. Suck.

Now what? Remember I mentioned the possibility of letting MV3D run some network protocol that would send user input data? Well, GlovePIE just happens to support the OSC (Open Sound Control) Protocol. There was a Python implementation, but it was not async. There seemed to be a Twisted implementation out there, but it looked to be part of a bigger app. So I wrote my own Twisted OSC client and server. Yay for Twisted making things easy for me. If anyone wants the code, just ask. It only supports the basics of what I needed to talk back and forth with GlovePIE. Anyway, it works, and I made a Python app that did the same as my initial GlovePIE script (turning on rumble when you hit a button). Now I have access to the complete wiimote functionality in Python. Yay!
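For a taste of what’s involved, here’s roughly what encoding a basic OSC message looks like: a null-padded address string, a type tag string, then big-endian arguments, with everything aligned to 4 bytes. This is a from-scratch sketch of the wire format, not the Twisted client/server I wrote, and it only handles the int/float basics.

```python
import struct


def _osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)


def osc_message(address, *args):
    """Encode a basic OSC message (ints and floats only), roughly what
    a minimal client needs to talk back and forth with GlovePIE."""
    tags = ","  # type tag string always starts with a comma
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # 32-bit big-endian float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # 32-bit big-endian int
        else:
            raise TypeError("only int/float supported in this sketch")
    return _osc_string(address) + _osc_string(tags) + payload
```

The resulting bytes just get handed to a UDP transport; on the Twisted side that’s a DatagramProtocol whose datagramReceived does the decoding in reverse.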

So, next up is to get it working in a Python-Ogre + ODE app so that I can play with some virtual cubes and spheres and stuff. I have some interesting ideas for combining the IR tracking and the accelerometers so that I can maybe have a few strategically placed IR LEDs that would allow me to point the wiimote in pretty much any direction and have it either know its position or be able to estimate it fairly well. However, I’ve bought out all the IR LEDs at the local Radio Shacks, and I don’t think they plan on restocking them, so I may have to order some online (which would be cheaper anyway).

January 7, 2008

Persistent obsessions.

Filed under: Uncategorized — SirGolan @ 10:14 pm

Phew! I think I finally have persistence worked out in MV3D. I’m quite happy with it. I went back to my datastore method (yes, I saw that cringe), but I feel like I’ve made it quite a bit better. One of the reasons I couldn’t say no to it is that it’s lightning fast. All the other methods I’ve tried (all of which involved a database of some sort) don’t even compare in speed, except for Axiom, which is actually faster than the datastore when using a transaction. One of the major issues with it before was that you never knew what you were saving, since it was basically a fancy layer on top of pickle. Now, however, it only stores what you want to store. It also supports upgrading and downgrading stored objects on the fly (hah, the original C++ MV3D persistence mechanism did this too). It allows you to do queries (no joins yet) extremely fast, and its indexes aren’t even optimized. Although, I will say, I haven’t tested it with a mega huge database / index yet. Maybe I’ll do that before moving on.

Every service that needs persistence has its own store right now. Most of them just save their objects directly to disk when they’re changed. However, the simulation service (a.k.a. destroyer of persistence mechanisms) stores to memory and then writes to disk in the background. One problem with the other persistence mechanisms I’ve used was that for the sim service, the data saved to disk needs to be a snapshot in time of all the objects the server is simulating. The other methods would take 20 seconds or so to save a couple of hundred objects, so I’d have to do it asynchronously, and then the first object’s saved position was 20 seconds behind the last one’s. What I do now is save all the objects to memory synchronously and then write that to disk. It does get a little chunky when there are 2000 objects on the server, but there are a bunch of optimizations I can make, including storing objects in batches based on the area they’re in. It takes around 1 second to store 2000 objects and about 15 seconds to write that data to disk.
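The synchronous-snapshot-then-background-write idea looks roughly like this. It’s a simplified sketch using JSON-serializable objects and a plain thread; the real store isn’t JSON-based, and the class and method names are made up.

```python
import copy
import json
import os
import tempfile
import threading


class SnapshotStore:
    """Take a consistent in-memory snapshot of all objects synchronously
    (the fast part), then write it to disk in a background thread so the
    simulation doesn't stall for the slow file write."""

    def __init__(self, path):
        self.path = path
        self._lock = threading.Lock()  # serialize overlapping writes

    def save(self, objects):
        # Synchronous: every object is captured at the same instant,
        # so the on-disk file is a true snapshot in time.
        snapshot = copy.deepcopy(objects)
        t = threading.Thread(target=self._write, args=(snapshot,))
        t.start()
        return t

    def _write(self, snapshot):
        with self._lock:
            # Write to a temp file and rename over the old save, so a
            # crash mid-write can't destroy the previous good snapshot.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
            with os.fdopen(fd, "w") as f:
                json.dump(snapshot, f)
            os.replace(tmp, self.path)
```

The write-to-temp-then-rename step is what keeps the store from losing data if the program stops at the wrong time, which is exactly the failure mode the old version had.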

One thing I’m noticing is that the new data store is very good to my data. The old version would lose your data if you stopped the program at the wrong time. This one keeps itself up to date, and the store that the sim server uses keeps backups (since it uses a monolithic file).

In any case, this closes the book on persistence finally! I’m very relieved. For a while, I was starting to think persisting a very open ended simulation wasn’t possible. I’m happy to say that it is. This is one big step towards a rumor I’ve been hearing around that an open source MV3D release would be coming up. There are still some other things I feel need to be done, such as bringing back the grass and trees, fixing bugs, documentation, and finally, figuring out some business related things.

Me and my obsessions, though. I’ve been completely obsessed with virtual reality lately. I don’t know why, either. It’s definitely related to MV3D, since MV3D is a virtual world simulator, and it started with the videos from this guy. Anyway, yes, it’s pretty silly, but I still think it’s cool. So, I made my IR webcam 6dof head tracker, an extra 6dof IR tracking beacon for the wiimote I just bought, and I would have made a couple more tracking beacons if Radio Shack had bothered to label their LEDs with their current / voltage ratings (oops :) ). And yes, the normal wiimote sensor bar isn’t 6dof; it’s only 4 (x, y, z, roll). Add in the accelerometers and you get 5dof. I think you still can’t measure yaw, but my Bluetooth adapter doesn’t come in until tomorrow, so I can’t say for sure. Basically, the beacon I made is a triangle instead of a bar. Using Alter’s algorithm, I can figure out the position and rotation of the beacon just like the head tracking.

The best part was going into GameStop to buy the wiimote. First off, the cashier was like 12, and the manager couldn’t have been more than 16. Hah. But then they tried to sell me a Wii game when I said I wanted the remote. I told them I didn’t have a Wii. This confused them greatly until I explained I was going to use the wiimote on my PC. And that just confused them more. Then they tried to sell me a sensor bar, to which I of course replied that I’d made one myself that was more accurate. Seriously, $20 for a few IR LEDs in a box? Even at Radio Shack’s insane $2/LED price, that’s crazy. Anyway, that apparently impressed them somewhat, but it didn’t stop them from trying to sell me some random PC game.

In any case, now I need to make MV3D support alternate input methods. I’m considering writing a simple UDP client/server (of which the client could live in MV3D’s client) for getting the data in there. That is, unless Twisted has a nonblocking USB webcam API (joking). Basically, a secondary app will do the 3d positioning for the head tracking (including getting images from the webcam) and also for the wiimote. All that’s left is to convince someone to buy me this or this. Did I mention that I’ve been obsessed with VR? I don’t really play computer games, so it’s not like it’ll “make my game better” in a flight sim or the latest FPS. I just want the setup for MV3D, and I know I’m being silly because MV3D is pretty boring right now.

All silliness aside, one real use I can see for the wiimote and such is that I should be able to create some fairly kickass content creation tools. I can’t tell you how many times when building MV3D’s world editing tools that I’ve been very frustrated by the lack of a Z axis on the mouse (or by it being a crappy scroll wheel). It would be very nice if I could place objects in the world with the wiimote, and especially nice if I could get a little force feedback for when the object you were manipulating touched something. That would make it easier to line things up. This is pretty much the only productive use I can think of for the wiimote and head tracking, but it’s a good one since now that persisting is done, I can actually start building a world or two. Some fixing of world editing tools required.

One fun, non-productive use would be to send the position of your “hand” back to the server and make it a physical extension of your PC’s body, so that you could punch things or grab them just like in that really crappy (but technologically advanced) game, Trespasser. Come on, you’ve got to admit the best part of that game was that your health meter was a tattoo on the main character’s boobs.

Powered by WordPress