I see three main ways to do what you want:
- convert the depth image provided by the Kinect into a laser scan with depthimage_to_laserscan and take the smallest range
- process the depth image directly, for example looking for the lowest-intensity (closest) pixel. For best performance, use a computer vision library such as OpenCV
- process the point cloud that the Kinect can also provide, possibly using PCL
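Way 1 is probably the easiest to get running. A minimal sketch of the range-filtering logic (the topic name `scan` and the node wiring are my assumptions; `depthimage_to_laserscan` publishes a standard `sensor_msgs/LaserScan`):

```python
import math

def closest_range(ranges, range_min, range_max):
    """Return the smallest valid reading from a LaserScan ranges array,
    ignoring NaN/inf and values outside the sensor's reported limits."""
    valid = [r for r in ranges
             if not math.isnan(r) and range_min < r < range_max]
    return min(valid) if valid else None

# In a ROS node this would be fed from a LaserScan callback, e.g.:
#   import rospy
#   from sensor_msgs.msg import LaserScan
#   def cb(msg):
#       d = closest_range(msg.ranges, msg.range_min, msg.range_max)
#       if d is not None:
#           rospy.loginfo("closest obstacle: %.2f m", d)
#   rospy.init_node("closest_obstacle")
#   rospy.Subscriber("scan", LaserScan, cb)
#   rospy.spin()

# Quick check with fake data (NaN = invalid, inf = out of range):
print(closest_range([float('nan'), 2.5, 0.9, float('inf'), 1.4], 0.45, 10.0))
```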
An example of way 3 is turtlebot_follower: it doesn't look for the closest obstacle, but for any sufficiently big obstacle in front of the robot (and follows it). But using PCL is probably a bit of overkill for simple obstacle avoidance...
An example of way 2 is... the same! I have seen that @Tully Foote rewrote turtlebot_follower in Kinetic to use the depth image processed with OpenCV (not 100% sure) instead of PCL.
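For way 2, the core idea is just "find the smallest valid depth value in the image". Here's a sketch using plain NumPy (in a real node you would get the depth array from the image topic via cv_bridge's `imgmsg_to_cv2`; the function name and the test image are mine):

```python
import numpy as np

def closest_pixel(depth):
    """Given a depth image in meters (NaN/0 = invalid),
    return (row, col, depth) of the nearest valid pixel, or None."""
    # Mask out invalid readings by replacing them with +inf
    valid = np.where(np.isfinite(depth) & (depth > 0), depth, np.inf)
    idx = np.unravel_index(np.argmin(valid), valid.shape)
    d = valid[idx]
    return (idx[0], idx[1], float(d)) if np.isfinite(d) else None

# In a ROS node, the depth array would come from the callback, e.g.:
#   from cv_bridge import CvBridge
#   depth = CvBridge().imgmsg_to_cv2(msg, desired_encoding="passthrough")

# Quick check with a fake 4x4 depth image:
img = np.full((4, 4), np.nan, dtype=np.float32)
img[2, 1] = 1.2
img[0, 3] = 0.8
print(closest_pixel(img))  # nearest valid point is at row 0, col 3
```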
That said, your question is rather vague... can you provide some more details about your plans?