A new milestone for the robot photographer! Here's a list of new things that the robot is capable of:
- It has been taught to detect and track humans in the Kinect's depth image!
- Depth and photographic cameras have been aligned (camera intrinsics and extrinsics have been calibrated). This effectively means that given a point on one image plane (e.g. in the depth image), the robot is able to tell where the corresponding point would be on the other image plane (e.g. in a photograph). A short sketch of this mapping follows the list.
- Composition and framing modules have been implemented, i.e. the robot now knows what a "good" picture looks like (a toy framing heuristic is sketched below, after the mapping example).
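
For the curious, here is a minimal sketch of how such a depth-to-RGB point mapping works, assuming simple pinhole models for both cameras. The intrinsic matrices and the extrinsic rotation/translation below are placeholder values for illustration, not the robot's actual calibration:

```python
import numpy as np

# Hypothetical calibration results (replace with actual calibrated values).
K_depth = np.array([[575.8, 0.0, 319.5],
                    [0.0, 575.8, 239.5],
                    [0.0,   0.0,   1.0]])
K_rgb = np.array([[525.0, 0.0, 319.5],
                  [0.0, 525.0, 239.5],
                  [0.0,   0.0,   1.0]])
R = np.eye(3)                       # rotation from depth to RGB camera frame
t = np.array([0.025, 0.0, 0.0])     # translation in metres (roughly the Kinect's baseline)

def depth_to_rgb_pixel(u, v, depth_m):
    """Map a depth-image pixel (u, v) with depth `depth_m` (metres) to RGB image coordinates."""
    # Back-project the depth pixel into a 3D point in the depth camera frame.
    p_depth = depth_m * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Transform the point into the RGB camera frame using the extrinsics.
    p_rgb = R @ p_depth + t
    # Project onto the RGB image plane.
    uvw = K_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example: where does the centre of the depth image, 2 m away, land in the photo?
print(depth_to_rgb_pixel(319.5, 239.5, 2.0))
```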
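
And here is a toy illustration of a framing heuristic. To be clear, this is not necessarily what the robot's composition module does; it is just one well-known composition rule (the rule of thirds) written as a scoring function:

```python
def thirds_score(subject_x, subject_y, width, height):
    """Return a score in [0, 1]; higher means the subject sits closer to a rule-of-thirds point."""
    # The four "power points" where the thirds lines of the frame intersect.
    points = [(width * i / 3.0, height * j / 3.0) for i in (1, 2) for j in (1, 2)]
    nearest = min(((subject_x - px) ** 2 + (subject_y - py) ** 2) ** 0.5
                  for px, py in points)
    diagonal = (width ** 2 + height ** 2) ** 0.5
    return 1.0 - nearest / diagonal

# A subject near the upper-left thirds point of a 640x480 frame scores highly.
print(thirds_score(213, 160, 640, 480))
```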
All in all, this means that the "beta" version of the robot is now fully functional. Here's a new video of it in action!
Next step: real-world deployment!