HIKVision DS-2CD2032-I

For £80 I managed to get a new HikVision DS-2CD2032-I camera, which supports a crazy 2048 x 1536 @ 20 FPS!

image

Note my version turned out to be a Chinese import, but the firmware is the latest V5.1.0 build 131202. This probably means that future upgrades could be difficult, but it was a good price.

The camera can mount NFS as well as SMB/CIFS file shares, although a few people have had issues. And here is where my issues started (I did get it working in the end). The camera management page has a section for NAS, where you can select your server, share and CIFS, as well as credentials.

image

Once set, you can go to the storage management page and see that the storage appears offline with no errors reported; see #10 (#09 is my working version).

image

The first step is to enable Telnet and log in to the camera itself; rather cool, it is running BusyBox! Great!

image

Running dmesg informed me that the user account was not correct (it is), and when I looked on my Windows machine (Windows Home Server) I saw that it too reported the user as incorrect. So it must be the domain, but here is the problem: the username field cannot contain \ or /.

image

The simple fix was to use the format user@domain.local rather than “domain\user”, since @ is permitted.

image

Now that I can see the storage, it needs to be formatted?? This is very misleading, as I have only mounted a shared folder on Windows; what could possibly need formatting? Hitting the ‘Format’ button works away and fills up the folder with files. Being paranoid and not wanting to wait for it to fill my 500GB drive, I turned on disk quotas for that user: click on the drive in Windows, select the Quota tab and set a limit for a particular user. It only works at the user level, not for groups. (I set it to 20GB.)

After rebooting the camera, it can see my storage and it also knows that there is 20GB available. Fantastic. Hitting Format created a “datadir0” folder, and inside that there were 80+ files (mostly 0 bytes).

image

Having set up motion detection and waited a while, I can see that the files begin to fill up. But they often end up being the same size; I was hoping for a single file for every motion event, so I wouldn’t have to log into the camera to see the footage.

image

None of the file properties are set on the recorded streams

image

However VLC gives me all the information I need to know

image

I have yet to play around with what goes into these files and why they are the same size, but this all looks promising.

I assume that it creates an empty file for every 0.25GB of storage that you have allocated (20GB × 4 files per GB = 80 files) and then fills each file with a series of motion detections. This could be an issue for playback… but I only have a camera and a file share, no need for an NVR.

Some issues:

  • dmesg keeps reporting that there is an issue with the CIFS mounting

image

Raspberry Pi video at 30fps using Camera module

We have been playing quite a bit with the new Raspberry Pi, a previous article covered video recording using a USB Webcam and a neat little piece of software called Motion. If you haven’t seen that yet, have a look at http://sjj.azurewebsites.net/?p=701.

I’ve now managed to get my hands on the Raspberry Pi Camera Module. It’s a really nice piece of hardware that costs around £25, and they claim you should be able to get 30fps at HD resolutions. There is also a NoIR (No InfraRed) version for recording in low light conditions.

We’ll attempt to get some motion detection done using this module and hopefully attain some high-resolution, high-fps video.

A couple of things. As usual, I’ll assume you have some basic understanding of Linux terminal commands. Also, if you want to control the Rpi using SSH from Windows, which is what I’ll be doing, use PuTTY (see http://www.chiark.greenend.org.uk/~sgtatham/putty/).

Right, let’s get started. First things first: make sure your Rpi is up-to-date by running:

sudo apt-get update
sudo apt-get upgrade

Next, you need to plug in the camera module via the CSI connector.

blog
Figure 1: Connected Rpi camera module. Note this is the NoIR model.

We now need to make sure the camera is actually enabled. Type:

sudo raspi-config

A menu will open up. Go to 5-Enable Camera and make sure Enable is selected. Your Rpi will reboot once this is done.

To test if your camera is working, type the following in the terminal:

raspivid -g

You should see the camera window open up and display some output for around 5 seconds. Notice the brilliant quality, frame rate and resolution.

Now, we can make use of motion to achieve what we’re trying to do. One of the issues we hit was that motion on its own is unable to read data off the CSI connector, hence cannot use the camera. One workaround is to use motion-mmal (an alternative that uses MMAL to read data off the camera). However, we face a similar restriction with this: the highest frame rate we could get at 720p (1280 x 720) was around 3-4fps.

Instead, we use a combination of OpenCV and MMAL. OpenCV (see http://opencv.org/) is a library that allows users to manipulate image/video data and extract useful information from it. It can do complicated tasks such as face recognition and is used in a variety of industries. OpenCV is able to read data from the camera at around 15fps, which is adequate for detecting motion. So we use OpenCV for the motion detection, and once motion is detected, we use the MMAL libraries (essentially like the raspivid command above) to record video. This means that we get high-frame-rate video without compromising on quality and resolution.

I make use of a github project written by sodnpoo (http://www.sodnpoo.com/) which makes things quite easy. The project is located at: https://github.com/sodnpoo/rpi-mmal-opencv-modetect.
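To give a feel for what the motion-detection half is doing, here is a minimal sketch of the frame-differencing idea in Python/OpenCV (illustrative only: sodnpoo’s project is written in C against the MMAL libraries, and the threshold values here are made up):

import cv2

cap = cv2.VideoCapture(0)                  # any OpenCV-readable camera
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)         # per-pixel change since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 2000:      # "motion" if enough pixels changed
        print("motion detected")           # this is where the project starts an MMAL recording
    prev = gray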

1. Install the necessary prerequisites.

sudo apt-get install cmake libopencv-dev

2. Create a directory to place the files in and go to the directory.

mkdir -p /home/pi/src/raspberrypi
cd /home/pi/src/raspberrypi

3. Build the prerequisite libraries.

make -C /opt/vc/src/hello_pi/libs/vgfont

4. Download the files and unzip.

wget https://github.com/sodnpoo/rpi-mmal-opencv-modetect/archive/master.zip
unzip master.zip

5. Create and go to the build folder.

mkdir build
cd build

6. Build the project.

cmake ../rpi-mmal-opencv-modetect-master
make

Done! You’ve successfully set up everything that’s required. Now to test it, run:

./mmal_opencv_modect > video.h264 2> video.log

You can change video.h264 to any other name that you want. If you want it stored in a different directory, make sure you give the absolute path. The 2> video.log part writes a log of the recording. Use Ctrl+C to stop recording.

Note: The command should be run exactly as above only if you are in the build directory. If not, the absolute path must be specified.

Tip: Since video files can get quite chunky quite quickly, why not write the files to your network share? If you still haven’t set up your network share, look at the RPi+USB Camera article (http://sjj.azurewebsites.net/?p=701). Then simply specify that location when writing the file.
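For example, assuming your share is mounted at /mnt/motionvideos (as in that article), the command from above becomes:

./mmal_opencv_modect > /mnt/motionvideos/video.h264 2> /mnt/motionvideos/video.log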

Blog-SB-6
Figure 2: My build directory showing the video file I just recorded, with the log.

Links: These are just some of the references I used.

Article by Umang Rajdev

http://www.raspberrypi.org/forums/viewtopic.php?f=43&t=44982
http://www.raspberrypi.org/forums/viewtopic.php?f=43&t=76414
http://www.sodnpoo.com/posts.xml/raspberrypi_camera_with_opencv_motion_detection_and_recording.xml

Setting up Microsoft Azure Service Bus on the Raspberry Pi

The concept of the Internet of Things revolves around connecting everyday (household) objects to the internet. The important thing to consider is that all this data being generated is only useful if something is done with it. The Microsoft Azure Service Bus allows you to process queues of such data and make it more useful.

In the previous post I showed you how I utilised a USB webcam and a Raspberry Pi as a motion detection device. The next step in this process is to connect the Raspberry Pi to the Microsoft Azure Service Bus. Ordinarily you would use a Windows PC with Visual Studio and C#, but in this case we want to use Python, since this is easily supported on the Rpi. We will utilise Qpid Proton, a messaging library that has a Python API.

Once again, I’ll assume that you have a basic understanding of Linux terminal commands, and will be controlling the Rpi via PuTTY (SSH Client, see http://www.chiark.greenend.org.uk/~sgtatham/putty/). I have also previously set up a Service Bus queue, which I will use to send and receive sample messages.

The first step is to install Qpid Proton:

wget http://mirror.catn.com/pub/apache/qpid/proton/0.7/qpid-proton-0.7.tar.gz

Then unpack it:

tar zxfv qpid-proton-0.7.tar.gz

Navigate to the directory, and display its contents:

cd qpid-proton-0.7/
ls -l

Blog-SB-1

Figure 1: Contents of the qpid-proton directory

The logical first thing to do is read the README file.

cat README

Blog-SB-2

Figure 2: Contents of the README file

Figure 2 above shows the steps needed to build the API. Note that it says yum, but we will use apt-get, the package manager on our Debian-based system. Because of this, we also have to look up the appropriate package names for apt. It is also worth noting that some of the packages are already installed by default, so we shall not reinstall them. We will also skip the files needed for Ruby and Java, as we are only interested in using Python.

sudo apt-get install cmake
sudo apt-get install uuid-dev
sudo apt-get install libssl-dev
sudo apt-get install swig
sudo apt-get install python-dev

Once this is done, we will create the build directory for cmake and change the working directory to it.

mkdir build
cd build

Now run cmake, as specified in the README file.

cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DSYSINSTALL_BINDINGS=ON

As cmake runs, check that no errors are reported and that it finds the required files. The next step is to run:

sudo make install

This will take a couple of minutes to run as it compiles the necessary files.

After this step is completed, we are ready to test the new setup. There are a couple of example Python scripts provided to allow you to send and receive messages. Let’s first go to the correct directory:

cd ..
cd examples/messenger/py

Blog-SB-3

Figure 3: Contents of the Python examples directory

We will first attempt to send a message. Open the send.py file:

sudo nano send.py

The only change I make is to the default address: from amqp://0.0.0.0 to the address of my previously set up Service Bus queue. Save and close the file.
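For reference, a Service Bus queue address for AMQP takes roughly this shape (the policy name, key, namespace and queue name below are placeholders; use your own from the Azure portal, with the key URL-encoded):

amqps://[policy-name]:[url-encoded-key]@[namespace].servicebus.windows.net/[queue-name]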

Now we send a sample message “TestingforRpiblog1” using the send.py script by:

python send.py TestingforRpiblog1

I get a confirmation message: “sent: TestingforRpiblog1”. Success!

Now to receive the message. Open recv.py:

sudo nano recv.py

Again, the only change I make is altering the address from the default amqp://0.0.0.0  to the address of my working Service Bus queue. Save and close the file.

Now run recv.py:

python recv.py

Blog-SB-4

Figure 4: Sent and received the “TestingforRpiblog1” message

You should receive the message alongside some other information. Once the message is received, you will also run into an error: this is most likely because the script has reached the end of the queue and there are no more messages to receive. We won’t worry too much about this right now. Use Ctrl+C to end the process.

If you run recv.py again, you should see that you receive the message again. This means that the message has not been taken out of the queue on the Service Bus. We need to make a few changes to ensure this is done correctly.

Open up the recv.py file again:

sudo nano recv.py

We will make 2 changes:

1. Add the following 2 lines after ‘mng.start()’:

mng.incoming_window = 1024
mng.outgoing_window = 1024

2. Change the commands under ‘try:’ to the following:

tkr = mng.get(msg)
print tkr
mng.accept(tkr)

Save and close the file. Check that this works by receiving the message again. You should receive the same “TestingforRpiblog1” message, with no errors. If you try again after a minute or so, it should not receive the same message again. Test a couple of times by sending and receiving messages, ensuring that no messages are received twice.
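Putting the two changes together, the receiving part of recv.py ends up looking roughly like this (a sketch based on the Proton 0.7 example script with its option parsing trimmed away; the address is a placeholder):

from proton import Messenger, Message

mng = Messenger()
mng.start()
mng.incoming_window = 1024   # windows must be set for trackers/acknowledgements to work
mng.outgoing_window = 1024

mng.subscribe("amqps://[policy-name]:[url-encoded-key]@[namespace].servicebus.windows.net/[queue-name]")

msg = Message()
while True:
    mng.recv()                   # block until messages arrive
    while mng.incoming:
        try:
            tkr = mng.get(msg)   # pop the next message; returns a tracker for the delivery
            print tkr
            mng.accept(tkr)      # settle the delivery so it is removed from the queue
        except Exception, e:
            print e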

Blog-SB-5

Figure 5: Sent and received test messages

Congratulations! That completes the setup required to run the Service Bus from your Raspberry Pi. There are several other things you can do from here, such as sending properties, carrying out asynchronous sending and receiving, etc. Experiment and see how far you can get your devices on the Internet of Things!

Article by: Umang Rajdev

External Links/Further Reading: These are some of the links I referred to while carrying out this project.

http://qpid.apache.org/proton/
http://qpid.apache.org/releases/qpid-proton-0.4/protocol-engine/python/api/proton-pysrc.html
http://qpid.apache.org/releases/qpid-proton-0.7/protocol-engine/python/api/proton.Messenger-class.html
http://osdir.com/ml/users-qpid.apache.org/2013-12/msg00082.html

Motion detection using the Raspberry Pi + USB Webcam

I just managed to get my hands on the Raspberry Pi Model B+, which leaves me with a spare Model B, so what to build? I also have a USB webcam (Trust Spotlight Webcam), so a DIY CCTV sounds like a great start. It turns out there are several different applications for using the Rpi for home automation and similar tasks. One of the most exciting projects is to use the camera for motion detection and record video/images when motion is triggered. This blog post provides a step by step guide on how I set up my Pi and camera, as well as a live webcam server, and a couple of other interesting things.

IMG_20140804_134740358
Figure 1: Raspberry Pi Model B

IMG_20140804_134745129nnn
Figure 2: Trust USB Webcam

I’ll assume that you already have an Rpi working (if not, help is available at http://www.raspberrypi.org/help/noobs-setup/). I’ll also be assuming that you have a basic understanding of Linux terminal commands.

I shall be controlling the Rpi using PuTTY on Windows (SSH client, download from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). This just allows me to control the Rpi’s terminal from a Windows machine. There’s nothing to prevent you from following this guide on the Rpi itself; in fact, that would be simpler.

Boot up the Rpi. The first step is to ensure that the OS is up-to-date. In the terminal (for those of you using the desktop version, open up a terminal first) type in the following lines and hit enter:

sudo apt-get update
sudo apt-get upgrade

We shall now install motion, which is the motion detection software that we shall utilise.

sudo apt-get install motion

Motion essentially utilises a .conf (configuration) file that supplies it with the parameters and settings that dictate the motion detection and the corresponding output/response. We shall now edit this .conf file. In the terminal, type:

sudo nano /etc/motion/motion.conf

You will quickly realise that this is a large file with over 600 lines of text for several settings. Feel free to read through. The file itself is quite well organised, with sections for different blocks of properties such as images, video, etc. They also provide us with descriptions on what each property does, which is quite handy.

For now, we shall focus on a couple of things:

daemon on #default off (This allows the motion to run in the background)
framerate 30 #default 2 (increased framerate)
width 640 #default 320 (changed width to match that of the webcam)
height 480 #default 240 (same as above but for height)
threshold 2000 #default 1500 (*explained in detail below)
pre_capture 2 #default 0 (captures 2 frames before motion was detected and adds that to the videos to make them smoother)
post_capture 5 #default 0 (same as above but captures frames after)
output_normal off #default on (this disables storing images, since we only require video)
ffmpeg_video_codec msmpeg4 #default swf (msmpeg4 is accepted by windows media player, hence easier to play on Windows)
target_dir /mnt/motionvideos #default /tmp/motion (changed the directory where videos will be stored)
webcam_maxrate 5 #default 1 (increase the max framerate of the live stream)
webcam_localhost off #default on (allows you to set up a live stream of the webcam)

*threshold: This property is at the heart of how motion operates. It compares 2 frames, and if more than x pixels have changed, it concludes that motion has occurred. The value of x is what we are setting here. Although I have no proof, I suspect that increasing the resolution will require this number to be increased as well.
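To put those numbers in perspective: a 640 x 480 frame has 307,200 pixels, so the default threshold of 1500 is about 0.5% of the frame, and the 2000 set above is about 0.65%. Doubling both dimensions would quadruple the pixel count, which is why the threshold would likely need scaling up with resolution.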

I would highly recommend reading through the file to get a better understanding of what each of the properties above does.

Tip: Since we opened the file with nano, you can use Ctrl+W to search for items in the file.

Use Ctrl+X to close the file. You will be prompted to save; type Y (for yes) and hit Enter to keep the file name.

We’ll also need to create the directory we specified in target_dir above and change the owner of that directory to motion. Use the following commands to do this:

sudo mkdir /mnt/motionvideos
sudo chown motion /mnt/motionvideos

We’ll now enable the daemon by editing the following file:

sudo nano /etc/default/motion

Once the file is opened, change start_motion_daemon=no to yes. Then hit Ctrl+X to close and Y to save (same as above). Enabling the daemon also means that motion will start on boot by default.

That completes the process of setting up motion. Now all we need to do is plug in the USB webcam and start motion.
Note: If you are using an older version of the Rpi (Model A or B, with at most 2 USB ports) and already have something plugged into the ports, you may require a USB hub. It is recommended that you use a powered hub, as there is only a limited amount of power that the Rpi can supply.

To start motion, type:

ubuntu@RPI:~$ sudo service motion start
 * Starting motion detection daemon motion       [ OK ]

While motion is running, if you want to check the status via the log, type (this will show you the last 50 lines of the syslog file):

tail -n50 /var/log/syslog

To view your live stream, open Firefox (IE and Chrome will require plug-ins, as the stream is MJPEG) and in the address bar type [RPI’s IP Address]:8081. You can also view the stream via VLC.

Once you are satisfied and want to stop motion, use:

sudo service motion stop

Now navigate to /mnt/motionvideos and you should see some video(s) of what you just recorded. If your webcam was pointed at a still frame and no motion was detected, you may not have any files, so try again with the webcam pointed at an active location.

Videos tend to take up a lot of storage. If you don’t have the luxury of a large SD card, it might be worth storing the videos on network storage. Not only does this save some valuable space, it also allows you to access the files from elsewhere, in our case on a Windows machine. You may have realised that viewing the videos on the Rpi is not the best of options, as it really struggles with video output. The next steps in this guide show how to store your files in a shared folder on a Windows machine.

Note: The videos already stored in the /mnt/motionvideos folder will be hidden once we mount storage over that directory.

Firstly, set up a new Windows account with basic rights and a password. The main reason behind this will be explained later. Once this is done, create a new folder where you would like to store the videos. Then share this folder by right-clicking on it and clicking on ‘Properties’. Next, go to the ‘Sharing’ tab and click on ‘Advanced Sharing’. Tick the ‘Share this folder’ option. Then click on ‘Permissions’ and allow ‘Everyone’ to have ‘Full Control’ i.e. change and read the contents. Once this is done, click on ‘OK’ for the ‘Permissions’ and ‘Advanced Sharing’ windows.

For screenshots on the process, visit http://www.howtogeek.com/176471/how-to-share-files-between-windows-and-linux/. Note that only the second half of the tutorial on that page is slightly different to what I do below on Linux.

Finally take note of the IP Address of the machine.

We have now configured Windows to share the folder. The next step is to access the folder from the Rpi. Firstly, we need to find the UserID (uid) and GroupID (gid) of the motion user. To find this, in the terminal, type:

cat /etc/passwd

[SNIP]
motion:x:103:106::/home/motion:/bin/false

Find ‘motion’ in that file. Take note of the 2 numbers after ‘motion:x:’. In this case they are 103 and 106 respectively.
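If you prefer, you can pull out just that line directly:

grep '^motion:' /etc/passwd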

Next, we will add the shared folder to our mount list. Use the command:

sudo nano /etc/fstab

Add a new line in the list and insert the following:

//[Windows machine IP Address]/[Shared folder] /mnt/motionvideos cifs auto,_netdev,username=[Windows Username],password=[Windows Password],uid=[First number from previous step],gid=[Second number from previous step] 0 0

Note: Make sure you remove the [] once you have entered the relevant information.
Save and close the file.
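As a concrete example, with made-up values (a Windows machine at 192.168.1.10 sharing a folder called motionvideos, the account from earlier called motionuser, and the uid/gid found above), the line would look like:

//192.168.1.10/motionvideos /mnt/motionvideos cifs auto,_netdev,username=motionuser,password=MyPassword,uid=103,gid=106 0 0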

Now we will mount the shared folder.

sudo mount -a

To check that you have actually mounted it, ask the machine to show you all mounted drives/devices. Do this by:

sudo mount

You should see a line with the IP Address of your Windows machine.

Once this is set up, we can further test by using the following command:

cd /mnt/motionvideos
sudo touch test

Open up the shared folder in Windows and you should see a file named test. This shows that you have correctly configured the share. Run motion and check that video files are now being written to the shared folder.

Congratulations! That completes the guide. You should have successfully set up a motion detection webcam and a live server to view your webcam’s output by now. Feel free to play around with the motion.conf file and tweak it to suit your needs. You will see that there are several handy options such as image output, the ability to trigger external commands when motion is detected and other quite interesting features.

Blog

Figure 3: Screenshot showing some of the videos motion has captured

 

Article by: Umang Rajdev

Other links/Background reading: These are some of the links I referenced during the process. Feel free to browse through the pages.

http://www.instructables.com/id/Raspberry-Pi-remote-webcam/
http://www.elecfreaks.com/6084.html
http://www.lavrsen.dk/foswiki/bin/view/Motion/WebHome
http://pingbin.com/2012/12/raspberry-pi-web-cam-server-motion/
http://www.debian-administration.org/article/347/An_Introduction_to_Video_Surveillance_with_%27Motion%27

Mbed with the DS3231 RTC

I have a great little RTC from Love Electronics, it uses the DS3231 chip which is similar to the DS1307 (as shown in this article http://www.l8ter.com/?p=417)

image 

Since I am starting out with mbed (https://mbed.org), this is probably a good exercise to see whether the DS3231 works with the LPC1768. I started by using the DS1307 library as a guide, but then noticed that there are a few differences. (Note there is an mbed DS3231 library you can import.)

image

If you have a look at the datasheets there are a few changes between the models, mainly the DS3231 has 2 alarms and is more accurate.

DS3231 http://datasheets.maximintegrated.com/en/ds/DS3231.pdf

DS1307 https://www.sparkfun.com/datasheets/Components/DS1307.pdf

DS1307

image

DS3231

image

Differences between the DS1307 and the DS3231:

1) Address 00h, BIT 7 should be 0, not CH (the clock-halt/start-stop feature does not exist on the DS3231).

2) Address 02h: for some reason the mask (in the example driver) was set to 196, but in reality you want to ignore BIT 7 and BIT 6, so use 0xC0, not 196. (Bug?)

3) Address 05h: BIT 7 is flipped every time there is an overflow of the years register (06h), so you need to ignore this bit when reading the month (see the sketch below).
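To illustrate the masking and BCD decoding, here is the idea in a few lines of Python (illustrative only; an actual mbed driver would be C++, and the raw value below is made up):

def bcd_to_dec(b):
    # registers hold binary-coded decimal: high nibble = tens, low nibble = units
    return ((b >> 4) * 10) + (b & 0x0F)

raw = 0x92                      # example read of register 05h: bit 7 (century) set, month = 12
month = bcd_to_dec(raw & 0x7F)  # mask off bit 7 before decoding
print(month)                    # prints 12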

 

Seconds..

When reading address 00h to get the seconds, for some reason they come back in the range 0 to 599 and loop every minute. This makes some sense, but it is not what the datasheet states (range 00-59).

I built my own .NET Gadgeteer module…

Ever wondered how the pros physically manage to make a module? Well, OK, you can get a fancy pick-and-place machine or send off for someone to assemble your module, but you can also do it by hand. It is not as hard as you might expect.

As part of a .NET Gadgeteer hands-on event at the Modern Jago in Shoreditch, we were delighted to have Justin Wilson from Ingenuity Micro attend and show us how things are done (www.ingenuitymicro.com). He has designed and built a nice collection of .NET Gadgeteer modules and mainboards and expects to have them available shortly.

Justin has designed a multicolour LED module for .NET Gadgeteer; it is a 3-colour LED on a PCB with 2 Gadgeteer sockets. The module is nice as it is low cost, simple and clean, and you can chain them together without the need for a co-processor on each module.

So the article title may be misleading: I really just got to assemble a module that Justin designed, rather than designing a new one. However, after the assembly I felt I really had built something. (It was also amazingly good fun.)

1) The first step is to design a PCB and get it printed; there are plenty of places that will produce PCBs, and it is common to use EAGLE (http://www.cadsoftusa.com) to draw the schematics. Justin came along with a sheet of “Ingenuity Micro RGB Pixel v1.0” PCBs and all the components needed for the assembly. It is worth mentioning that some of the components are very, very small: the black dot in the picture below is a NAND gate, with a total of 5 pins. When you compare it with the Gadgeteer socket you can see that this could be a troublesome task. This PCB looks a bit rough as they were just snapped along the edges to separate them from the sheet.

image

2) The next step is to add some solder paste to each of the pads where we plan to solder components. There are two methods.

  • The first is to get a small screwdriver or toothpick and put a tiny blob of paste on each and every pad where you need to solder components. This is very time consuming and can result in an uneven distribution of solder, risking dry joints or bridging between pins. The workshop was over by the time I got to build my module, so this was the method I had to use.
  • The second method is to use a solder mask (strictly speaking, a solder paste stencil). This is a very thin piece of plastic (like an overhead transparency) with holes cut (usually laser cut) for all the points where you want paste. The key is to align the mask exactly with the PCB and then, in one go, spread the paste across the top. It is handy to make a holder to align the PCB and mask, as shown below. Wherever there is a hole in the mask, solder paste will be deposited. Be careful to remove the mask cleanly in one move so as not to smudge the paste that has been left behind. If you have the correct mask, the correct way round, you will end up with every pad on the PCB nicely coated with solder paste. (EAGLE can produce the mask files: http://www.cadsoftusa.com, and there are companies that will laser cut your EAGLE mask out of plastic.)

 image

3) It is easiest to place all the components on one side of the PCB in one go. Be sure to place all the components the correct way round! The RGB Pixel only has one component on the top, an SPI LED with 6 pads, which is nice and big, so easy to place. Check that all the pads/pins are correctly aligned and that there are no bridges of paste between pins. Don’t worry too much, as the solder and flux will pull the component into place a bit.

4) Once all the components are in the correct place, it is time to heat the board. Some people use an oven, but as it was a workshop it was easier to use a hot air rework station (basically a very hot air blower); they are amazingly cheap and I got one from Amazon. I’m not too sure how long it will last, but I can’t complain as it was a real bargain. The hot air station goes up to 450°C, which is enough to burn almost anything! I run mine at 250°C, but check what temperature your paste recommends. Also, you might want to check the maximum temperature tolerance of your components. The picture below shows the LED being heated; the solder starts grey and paste-like, but when it comes up to temperature it will quickly flow around the pins/pads and collect in all the places you wanted it to be, just like magic. It will go silver and shiny when ready.

image 

5) The next step was to do the other side of the PCB. This was more complex, as there are 2 capacitors (C1, C2), 1 resistor (R1), 1 NAND gate and 2 Gadgeteer sockets, and they are all really tiny; the paste helps hold them in place. (Completed module below.)

image

6) The next step is to test the module and write some software. If you are writing a driver for a new module, be sure to follow the module builders’ guide: http://gadgeteer.codeplex.com/releases/view/105388

Justin provided a driver for his module, so it was just a case of testing the various methods. There is a code sample below.

using System;
using System.Collections;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Presentation;
using Microsoft.SPOT.Presentation.Controls;
using Microsoft.SPOT.Presentation.Media;
using Microsoft.SPOT.Touch;

using Gadgeteer.Networking;
using GT = Gadgeteer;
using GTM = Gadgeteer.Modules;
using Gadgeteer.Modules.IngenuityMIcro;

namespace IngenuityMicroPixelTest
{
    public partial class Program
    {
        void ProgramStarted()
        {
            Debug.Print("Program Started");
            RgbPixel led = new RgbPixel(6);  // module on socket 6
            led.NumPixel(1);                 // one LED in the chain
            led.Set(0, 255, 0, 0, true);     // RED
            led.Set(0, 0, 255, 0, true);     // GREEN
            led.Set(0, 0, 0, 255, true);     // BLUE
            led.Set(0, 255, 255, 255, true); // WHITE

            led.Fade();
        }
    }
}

Fantastic, it worked!!! (Trying not to be too surprised.) The module is great: you can chain multiple together and they are very bright! Check out the colours below… it is a multicolour LED with 0-255 for each colour, resulting in a possible 16 million (256³ = 16,777,216) different colours.

image

image

image

Hardware used:

Love Electronics USB DC Power Module:

http://gadgeteering.net/module/love-electronics-usb-dc-power-module

GHI FEZ Cerberus:

http://gadgeteering.net/mainboard/ghi-fez-cerberus

RGB Pixel : Ingenuity Micro

 

Happy gadgeteering.net

PIR module for .NET Gadgeteer (Motion Sensor)

(Module: http://gadgeteering.net/module/ghi-pir-sensor)

We met up last night to have a look at a .NET Gadgeteer module for http://www.meetup.com/GadgeteerSouthCoast/, picking a nice easy module to begin with. Despite being called the GHI PIR Sensor, it is found in the toolbox as a “Motion Sensor”.

image

Here is the view from the top of the module:

WP_20130524_012

There is just the one event, Motion_Sensed, which is triggered whenever the sensor spots an IR source (heat/a person). There is also a property that can be checked to see if the sensor can still see the source once it has been triggered.

image

I would have expected to see a datasheet on Codeplex (http://gadgeteer.codeplex.com/SourceControl/latest), but there is nothing. Next stop is the module manufacturer’s website/forum. In this case it is GHI, and I am not the first to ask questions about the PIR module:

Based on this information I have labelled the pots and jumper below.

PIR

Motion_Sensed event

First, let’s see how often this event gets triggered. Add code for the event:

Debug.Print("Program Started");
startTime = DateTime.Now;
motion_Sensor.Motion_Sensed += new GTM.GHIElectronics.Motion_Sensor.Motion_SensorEventHandler(motion_Sensor_Motion_Sensed);

And here is what we will execute every time it is triggered:

void motion_Sensor_Motion_Sensed(GTM.GHIElectronics.Motion_Sensor sender, GTM.GHIElectronics.Motion_Sensor.Motion_SensorState state)
{
    Debug.Print("Time span = " + (DateTime.UtcNow - startTime).ToString());
    startTime = DateTime.UtcNow;
    Debug.Print("TRIGGER");
}

We set the time pot to min (anti-clockwise) so that the events are triggered as often as possible (the distance pot is at the mid point), and get the following output:

Using mainboard GHI Electronics FEZHydra version 1.2
Program Started
Sensor = False
Time span = 00:00:09.9934515
TRIGGER
The thread ‘<No Name>’ (0x3) has exited with code 0 (0x0).
Time span = 00:00:07.8206963
TRIGGER
Time span = 00:00:07.7379289
TRIGGER
Time span = 00:00:06.4709773
TRIGGER
Time span = 00:00:08.7667315
TRIGGER
Time span = 00:00:07.6156723
TRIGGER
Time span = 00:00:06.2445184
TRIGGER
Time span = 00:00:06.6116992

Setting the time pot to max (clockwise), we get the following output:

Time span = 00:03:42.0452275
TRIGGER
Time span = 00:03:58.3361766
TRIGGER
Time span = 00:03:41.4596557
TRIGGER
Time span = 00:03:40.7306982

So the time range is around 7 sec (min) to 230 sec (max), which is in keeping with the datasheet, which I think says 5 sec - 300 sec:

image
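(For the record: ignoring the first span in the min-setting output above, which includes start-up, the remaining spans average about 7.3 sec, and the four max-setting spans average about 226 sec, hence the ~7 sec and ~230 sec figures.)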

Changing the distance pot certainly did change the range or sensitivity, but it was not so easy to work out the exact trigger distances. However, the datasheet shows a range of around 5-7m, and it certainly gets triggered within those ranges.

image

SensorStillActive property

There is a property that is readable and is of type Boolean, but what can we use it for?

If we run it with the time pot on minimum, distance on 50% and the jumper set to “Repeatedly trigger”, we get the following output:

TRIGGER
State changed to: False @ 00:00:00.0011277
State changed to: True @ 00:00:02.2871399
Time span = 00:00:09.5215143
TRIGGER
State changed to: False @ 00:00:00.0650803
State changed to: True @ 00:00:02.4655180

And if we set the jumper to “Single trigger” we get the following:

TRIGGER
State changed to: False @ 00:00:00.0010278
State changed to: True @ 00:00:08.3824435
Time span = 00:00:11.9519795
TRIGGER
State changed to: False @ 00:00:00.0300698
State changed to: True @ 00:00:03.0303744
Time span = 00:00:06.5282752
TRIGGER
State changed to: False @ 00:00:00.0019021
State changed to: True @ 00:00:02.5016538
Time span = 00:00:04.7239091
TRIGGER
State changed to: False @ 00:00:00.0771596
State changed to: True @ 00:00:02.9764812
State changed to: False @ 00:00:06.4768678
Time span = 00:00:06.4945881
TRIGGER
State changed to: True @ 00:00:07.4823129
Time span = 00:00:11.0135769

So I am unsure what the jumper does, but the event is triggered when SensorStillActive goes to false; it is held false for around 3 seconds and then returns to true. The trigger happens every 6 sec as expected. I suspect it would be held false for as long as the sensor can still detect an IR source, however mine always stops sensing despite being waved around at a source. Here is what the datasheet has to say about the jumper:

image

 

Have some source code:

using System;
using System.Collections;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Presentation;
using Microsoft.SPOT.Presentation.Controls;
using Microsoft.SPOT.Presentation.Media;
using Microsoft.SPOT.Touch;

using Gadgeteer.Networking;
using GT = Gadgeteer;
using GTM = Gadgeteer.Modules;

namespace CompassDriverExample
{
    public partial class Program
    {
        private DateTime startTime;
        private bool sensorState;

        void ProgramStarted()
        {
            Debug.Print("Program Started");
            startTime = DateTime.Now;
            sensorState = motion_Sensor.SensorStillActive;
            motion_Sensor.Motion_Sensed += new GTM.GHIElectronics.Motion_Sensor.Motion_SensorEventHandler(motion_Sensor_Motion_Sensed);
            // poll the SensorStillActive property every 50ms and log any change
            GT.Timer tmr = new GT.Timer(50, GT.Timer.BehaviorType.RunContinuously);
            tmr.Tick += new GT.Timer.TickEventHandler(tmr_Tick);
            tmr.Start();
        }

        void tmr_Tick(GT.Timer timer)
        {
            if (motion_Sensor.SensorStillActive != sensorState)
            {
                Debug.Print("State changed to: " + motion_Sensor.SensorStillActive + " @ " + (DateTime.UtcNow - startTime).ToString());
                sensorState = motion_Sensor.SensorStillActive;
            }
        }

        void motion_Sensor_Motion_Sensed(GTM.GHIElectronics.Motion_Sensor sender, GTM.GHIElectronics.Motion_Sensor.Motion_SensorState state)
        {
            Debug.Print("Time span = " + (DateTime.UtcNow - startTime).ToString());
            startTime = DateTime.UtcNow;
            Debug.Print("TRIGGER");
        }
    }
}

I want to edit the .NET Gadgeteer drivers for a particular module!

When you drag a module from the toolbox onto the designer, it references the correct DLLs and adds import statements to Program.cs.

image

Fig 1: Adding a compass module to the designer.

This way you are ready to just use the module in your code. This is the normal way to do things… However, what if you want to make a change to the driver for a particular module? For example, to:

  • add an event, method or property,
  • perhaps fix an annoying bug,
  • add extra debug output, or work out how the hardware works,
  • load a different driver, such as the GPS/Bluetooth module drivers from Codeplex.com,
  • copy the driver for a similar piece of hardware,
  • compile a driver for a different version of NetMF (e.g. upgrade 4.1 -> 4.2),
  • you can’t be bothered to get WiX set up and working just to make a small change.

The correct way to change a driver is to get the source code VS solution, edit it and then rebuild. This builds an installer (msi) which can be installed and distributed with the changes intact. That process is fine for the final edits and distribution of a driver, but what if you just want to try a few changes? It becomes a long process, as each change results in a build, reinstall, test cycle.

This post is designed to show how to simply edit drivers or include different drivers. It is NOT a substitute for building the install files which should be distributed with each hardware module.

Example

Let’s say we want to make an edit to the compass driver; perhaps it does not function as we expected. Here is a summary of what to do:

  1. Get the source code. (Either the existing source, or substitute with a different one)
  2. Remove the existing driver from your solution.
  3. Add the (new) source code to your solution.
  4. Add the module to your project.
  5. Edit the driver and test.

Get the source code.

There are multiple drivers for some modules, and the alternatives can often be better. (Well at least until the manufacturer incorporates the changes into the distributed driver.)

If you are looking for other drivers try: http://www.codeplex.com/site/search?query=.net%20gadgeteer

The source for the distributed drivers lives here: http://gadgeteer.codeplex.com/SourceControl/latest

I use SVN to get the latest source, but if you just want the source then click the download button. It will give you a zip; be sure to expand it somewhere sensible, as we will be using it later.

image

Remove the existing driver from your solution.

This step is important, as the old driver can drive you crazy later by swapping back in for your new one when you least suspect it.

Here is some background information first:

If we look at the code behind the designer, we can see what happens when the compass module is added.

image

Click “Show all files” if you cannot see the “Program.gadgeteer.cs” file. This file is regenerated EVERY time you save the designer (the place you drag modules onto). Inside this file there is the following code (for this example project):

namespace CompassDriverExample {
    using Gadgeteer;
    using GTM = Gadgeteer.Modules;


    public partial class Program : Gadgeteer.Program {

        private Gadgeteer.Modules.Seeed.Compass compass;

        public static void Main() {
            // Important to initialize the Mainboard first
            Program.Mainboard = new LoveElectronics.Gadgeteer.ArgonR1();
            Program p = new Program();
            p.InitializeModules();
            p.ProgramStarted();
            // Starts Dispatcher
            p.Run();
        }

        private void InitializeModules() {
            this.compass = new GTM.Seeed.Compass(8);
        }
    }
}

The first line of importance creates a compass variable of type Gadgeteer.Modules.Seeed.Compass.

private Gadgeteer.Modules.Seeed.Compass compass;

The second line of importance is the one that creates an instance of the compass object and specifies the socket number.

this.compass = new GTM.Seeed.Compass(8);

In order to create the compass object, the project needs to know about the type. This was achieved automatically when you dragged in the module, by adding a reference to the compiled DLL.

image

Here are the things you need to change to remove the old driver:

1) Delete the module from the designer.

- This will remove it from the generated code and will stop the auto generated code swapping back to the old driver when you save.

2) Check that the references on your project no longer reference the DLL for the old driver. (highlighted above)

or

You could just start with a blank project.

The reason I show all this is that later on you will need to instantiate the object yourself. If you don’t know what to write, you can copy the auto-generated code… more on that later.

Add the (new) source code to your solution.

At this point you have a blank project, or one that does NOT reference/use the module whose driver you plan to edit. First we need to add the source code; this is the code that you downloaded earlier, or it can be a different driver that you wish to try.

image

In Visual Studio, right click on the solution and add an existing project by navigating to the correct project. (This can be a little confusing, as it tends to be a long way down the folder structure, and you need to be sure of the version you are using, v4.2.)

image

Inside the Compass -> Software folder you are looking for the compass.csproj file. (I would expect to see 2 versions, one for 4.1 and one for 4.2, but this may not be the case if an old driver template was used.)

image 

A little note: in this particular case there is just a driver for the 4.1 version of NetMF, yet when you install the latest SDK the compass is available in 4.2. This means that the manufacturer has updated the driver but not uploaded the changes to Codeplex. Be sure to encourage manufacturers to keep the source updated. This could also be the case if the Codeplex source code has different methods/signatures. Since my project is v4.2, I will just choose to update the target framework in the project properties. (A good example of how to upgrade versions.)

image 

Add the module to your project

At this stage you should have multiple projects in your solution: your own project, as well as the project for the module you want to edit. So now we have all the source code in place. Let’s use it.

First we need to let our project know about the new project by adding a reference, but instead of referencing the DLL we reference the project with the source code.

image

It is under the Projects tab, not the .NET tab:

image

You will then see a reference to the Compass project; note that this is different to the reference shown above, from when you let the designer import it.

image

Now we are ready to use the module. Remember all the code that the designer automatically generated for you? Well, now you have to write it yourself. Instead of adding code to the generated file (you will lose it when it regenerates), we add it to the main Program.cs file (only 2 lines).

First create a private variable for the compass, then in ProgramStarted create an instance on socket 8. Be sure not to use the socket in the designer as well; if you do, you will get a warning, so fear not.

using System;
using System.Collections;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Presentation;
using Microsoft.SPOT.Presentation.Controls;
using Microsoft.SPOT.Presentation.Media;
using Microsoft.SPOT.Touch;

using Gadgeteer.Networking;
using GT = Gadgeteer;
using GTM = Gadgeteer.Modules;

namespace CompassDriverExample
{
    public partial class Program
    {
        private Gadgeteer.Modules.Seeed.Compass compass; //Declare the compass

        void ProgramStarted()
        {
            this.compass = new GTM.Seeed.Compass(8); // create an instance on socket 8
            Debug.Print("Program Started");
        }
    }
}

Edit the driver and test.

If you have made it this far, you have a solution that builds the module from source and instantiates it, ready for use. Have a go to see if it works as expected and all the methods are there.

image

Now down to the best bit: you can now just change the module source and deploy as you normally would. The compass project will be rebuilt every time. For example, you may want to know exactly what the ‘Gain’ value is and how it is set. The module has a SetGain method which takes a Gain value, ranging from Gain1 to Gain8. If you hold your mouse over the value it will display a tooltip, but let’s go and look at the source.

image

If you right click on a method (and have the source available) you can select “Go to definition” and it will take you to the code – got to love Visual Studio.

image

The method source simply writes a value to a register, where the value is of type ‘Gain’ (Right click on ‘Gain’ and go to definition)

public void SetGain(Gain gain)
{
    Write(Register.CRB, (byte)gain);
}

Here are the values:

public enum Gain : byte
{
    /// <summary>
    /// +/- 0.88 Ga
    /// </summary>
    Gain1 = 0x00,

    /// <summary>
    /// +/- 1.2 Ga (default)
    /// </summary>
    Gain2 = 0x20,

    /// <summary>
    /// +/- 1.9 Ga
    /// </summary>
    Gain3 = 0x40,

    /// <summary>
    /// +/- 2.5 Ga
    /// </summary>
    Gain4 = 0x60,

    /// <summary>
    /// +/- 4.0 Ga
    /// </summary>
    Gain5 = 0x80,

    /// <summary>
    /// +/- 4.7 Ga
    /// </summary>
    Gain6 = 0xA0,

    /// <summary>
    /// +/- 5.6 Ga
    /// </summary>
    Gain7 = 0xC0,

    /// <summary>
    /// +/- 8.1 Ga
    /// </summary>
    Gain8 = 0xE0,
}

If you rename one of these values, it will have an immediate effect on your code. For example:

public enum Gain : byte
{
    /// <summary>
    /// +/- 0.88 Ga
    /// </summary>
    Gain1RenamedToSomethingBetter = 0x00,

    /// <summary>
Will immediately change in your program.cs

image

Job done: you are now able to edit the driver and have the changes take effect immediately. Compiling is the same as always, and the solution will deploy your latest code to the hardware when you deploy. This way you can edit and change code until you are satisfied. Once that is done, you can then go back to the Compass SOLUTION and build an installer…

Advanced – where did these numbers come from?

But you have to wonder: what are these values and what do they mean? Here is where we need the datasheet. A few folders up from the source code there is a ‘Hardware’ folder that contains a datasheet. If you have a custom module or there is no datasheet, you can do very well by searching for the chip number. If stuck, pester the manufacturer to add a link to the datasheet for the module.

This compass module is an HMC5883, and there is a datasheet in the Hardware folder (in the zip downloaded from Codeplex):

image

These can be hard to read, but we can pick out the bits we are interested in. For example, searching the datasheet for ‘Gain’ shows that there is a gain control and that there are 3 bits dedicated to changing it.

 image

Further on in the datasheet there is a section about ‘configuration register B’; this is for setting the gain. Remember that the SetGain method just writes a byte (8 bits) to a register (register B); table 10 lets us know that the register is 1 byte (8 bits), same as our code. So now we just need to check the values that are to be written. Table 10 on page 13 shows that bits 7, 6 and 5 are for setting the gain, and bits 4-0 must be set to zero. Table 12 shows the different gain values and how each changes the output.

image

For example, the default gain is 1024 counts/Gauss, and this is set by writing [0,0,1,0,0,0,0,0] to the 8 bits of the register; but if we look at the code, the values are written in hex, not binary.

If you are not comfortable converting, believe it or not Calculator is your friend! Switch Calculator into programmer mode:

image

Select binary (Bin) and then input the binary number you want to convert, in this case [0,0,1,0,0,0,0,0] (note you do not need the leading zeros):

image

If you then select hex, it will convert the value. (Answer 20)

image

Since the compiler would not know whether we meant 20 (twenty, in decimal) or 20 (in hex, which is thirty-two in decimal, binary [0,0,1,0,0,0,0,0]), a prefix is added to the number to show that it is hex: ‘0x’. And sure enough, if we look at the code, the default gain is ‘0x20’, which is the correct value according to the datasheet.
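As a quick sanity check by hand: split the binary [0,0,1,0,0,0,0,0] into two nibbles, 0010 and 0000, which are 2 and 0 in hex, giving 0x20, i.e. 2 x 16 + 0 = 32 in decimal.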

public enum Gain : byte
{
    /// <summary>
    /// +/- 0.88 Ga
    /// </summary>
    Gain1RenamedToSomethingBetter = 0x00,

    /// <summary>
    /// +/- 1.2 Ga (default)
    /// </summary>
    Gain2 = 0x20,

Have a look around the datasheet; perhaps there are hardware features that are not implemented in code, so you can improve the performance, enhance the driver, solve bugs or troubleshoot issues.

For example, register A can be used to set the sensor rate (default 15Hz) as well as a bias for the different axes; but if you right click on the register name and select ‘Find all references’, it shows that the driver never uses it…

image

Happy Gadgeteering…. check www.gadgeteering.net for a complete list of hardware.

Tips

If you are using the Express version of Visual Studio, it only supports one language. So if you have a VB project and import a C# project, you won’t get any debugging and things will look a bit strange.

Remember that since you are compiling from source, you can now add a breakpoint inside the driver! (F9) This is a fantastic way to halt execution and inspect what is going on, without having to write tonnes of output lines…

.NET Gadgeteer is causing me to get a BSOD

There is a case where you can get a blue screen of death (BSOD), where your computer just dies/reboots, usually during a NetMF or .NET Gadgeteer deploy, and especially if you hit the reset button (on the device) or attempt to pull out the USB cable.

This problem is to do with the USB driver, and it has been addressed, so why are you still getting it? Well, you are using the wrong driver :-). The old drivers run in kernel mode and hence can cause a BSOD, but deployment has since been switched to WinUSB, which runs in user mode, so it cannot BSOD. There are some issues with WinUSB and virtualised installs, and not all hardware supports it.

The solution: swap the driver. Note this does not work for all hardware. If you cannot swap the driver for one of the various known reasons, learn to recognise the symptoms so you can avoid a BSOD. Usually Visual Studio just sits on the deployment stage; if you do almost anything, it will BSOD. However, if you click Build -> Cancel and wait 2 sec, it will cancel and you can reset/redeploy. (Note: on VS Express you don’t have this menu item.)

GHI have a great article on this http://www.ghielectronics.com/docs/109/usb-drivers-choices-including-winusb

The steps below let you check which driver you are currently using and how to swap it. (Use the GHI article for more info or if you get stuck.)

What driver am I currently using?

During a deploy today I hit the reset button on my Spider and it rebooted my laptop; strange, as I thought I was using the WinUSB driver. Open Device Manager and look for the Gadgeteer device. In the case of a Spider mainboard it will appear as a .NET debuggable device. Here is the device driver information for the old USB drivers; it turns out I am running the wrong one, hence the BSOD.

DriverThatBSOD

Swapping the driver.

If you click “Update Driver” you can navigate to the GHI drivers directory and select the WinUSB driver.

image

Once that has completed, you will see that the new drivers have been loaded. Hooray! The Spider mainboard also now appears under a different section in Device Manager.

image

image

(WinUSB drivers)

 

Notes:

GHI firmware from 14Feb (listed as updated 18 Feb) 2013

VS 2010

Spider mainboard.