ALCAM board time set command isn't interpreting seconds value correctly

I’m running an ALCAM camera board with an Arduino Pro Mini. I’m finding that either I’m not understanding the T S> command format of the ALCAM camera board, or the ALCAM is not interpreting the time value as specified in the documentation.

For example, following the process outlined on pages 34-35 of the ALCAM SoC Processor PDF, I am trying to set the processor's clock after a reboot to a date and time of 2015-07-21 15:00:30, using the following command:

T S>46F5780F

Checking the time with the “T G” command after the time set command yields the following response:
15:00:15
2015-07-21
!00
which has the seconds value wrong: it should be returned as 30 seconds, but instead I get 15 seconds back. The incorrect seconds value is also written into the timestamp of the picture my code takes immediately afterward, so this isn't just the ALCAM reporting a seconds value it forgot to multiply by 2; the processor is actually using the incorrect value of 15 seconds internally for other operations.

The User Manual outlines the process of encoding the date and time as the hex string given above, and states that the seconds value needs to be divided by 2 to fit in the lowest 5 bits of the 32-bit timestamp. In my code, I divide my true seconds value (30) by 2, yielding 15, and append that to the end of the timestamp, giving the final F of the hex string 46F5780F.
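For reference, here is my field-by-field working for that example, following the manual's bit layout: the year offset 2015 - 1980 = 35 goes in bits 31-25, month 7 in bits 24-21, day 21 in bits 20-16, hour 15 in bits 15-11, minute 0 in bits 10-5, and seconds 30 / 2 = 15 in bits 4-0, so the packed value is (35 << 25) | (7 << 21) | (21 << 16) | (15 << 11) | (0 << 5) | 15 = 0x46F5780F.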


    DateTime now = rtc.now(); // Using the common Arduino RTClib; assume this returns 2015-07-21 15:00:30.
    String cmd = "T S>"; // Beginning of the time set command
    // Now pack the date and time into the 32-bit timestamp.
    uint32_t timeVal = 0;
    uint32_t subval = now.year() - 1980; // Calculate the ALCAM year offset
    timeVal += (subval << 25);  // Left shift up to bits 31-25 of timeVal
    // Insert month
    subval = now.month();
    timeVal += (subval << 21);  // Left shift up to bits 24-21 of timeVal
    // Insert day
    subval = now.day();
    timeVal += (subval << 16);  // Left shift up to bits 20-16 of timeVal
    // Insert hours
    subval = now.hour();
    timeVal += (subval << 11);  // Left shift up to bits 15-11 of timeVal
    // Insert minutes
    subval = now.minute();
    timeVal += (subval << 5);   // Left shift up to bits 10-5 of timeVal
    // Insert seconds, divided by 2 as specified by the ALCAM documentation
    subval = now.second() / 2;
    timeVal += subval;          // Insert in lower bits 4-0 of timeVal
    // Convert the upper 16 bits to a hex string
    // (cast to a 16-bit value so it matches the %X format on AVR)
    char buf[14];
    sprintf(buf, "%04X", (unsigned int)(timeVal >> 16));
    cmd = cmd + buf;    // Add to the cmd String
    // Then convert the lower 16 bits to a hex string
    sprintf(buf, "%04X", (unsigned int)(timeVal & 0xFFFF));
    cmd = cmd + buf;    // Add to the cmd String
    cmd = cmd + "\n";   // Add the line feed
    // The String cmd next gets converted to a character array using the Arduino String function toCharArray()
    cmd.toCharArray(buf, 14); // Convert String object to character array 'buf' (size 14)
    // buf then gets sent to the ALCAM using the WriteCmd() function from the ALCAM_driver library.


It’s possible I’ve written that code wrong, but it sure seems like this code conforms to the spec in the ALCAM pdf, and I should get back a time value of 15:00:30.

@ Bmaf -

I'm not sure I understand your question.

You can only set the seconds from 0-31; anything higher than that will be off by half.
The seconds counter starts at the input value if it is 0-31, or at half the value if it is 32-59.
Following your code, you set 15 (30/2), so when you get the time it shows 15; that is correct.
Don't divide second() by 2 and you will get 30 back.
The above applies to input (set time).

Output (get time) can show 0-59; the limit has nothing to do with that.

Also, a 16-bit buffer should be buf[16]; I'm not sure why yours is 14.

Okay, I misunderstood what the user manual was trying to convey. So there is no way to accurately set the ALCAM clock if you attempt to set it during the 32-59 second portion of a minute? If you want accuracy at the seconds level, you must set the clock during the first half of a minute, 0-31?

I figured (wrongly) that the processor would just assume every seconds value being passed to it was divided by 2 by the user (presumably using integer division and dropping any 0.5 remainder), and thus the processor would take the input value, multiply by 2, and end up with a seconds value that was 0 or 1 second off of the true time, but could result in a value anywhere between 0 and 59 (though technically only up to 58). For example, in my naive view of an ideal world, a real-world seconds value of 44 would get halved by the user (=22), and the processor would read 22, multiply that value by 2, and set its internal clock to 44 seconds. A real-world seconds value of 2 would get divided by 2 (=1), and the processor would then multiply it by 2 to end up with an internal clock value of 2 seconds. That seemed like the sensible thing to do based on my reading of the user manual, but it seems the programmers have gone a different direction and just eliminated the possibility of setting seconds values above 31? I’d be curious to know what the reasoning was behind that decision.
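To make the two interpretations concrete, here is a little sketch of what I mean (my own illustration only; the function names are mine, not anything from the ALCAM firmware or driver):

    // My illustration of the two possible readings of the 5-bit seconds field.
    #include <stdint.h>

    // What I expected, per the FAT convention: the user stores seconds/2,
    // so the firmware doubles the field value to recover the real seconds.
    uint8_t fatDecodeSeconds(uint8_t field) {   // field is 0-29
        return field * 2;                       // yields 0-58, at most 1 s low
    }

    // What the ALCAM apparently does: take the 5-bit field literally,
    // so only true seconds of 0-31 can ever be represented.
    uint8_t alcamDecodeSeconds(uint8_t field) { // field is 0-31
        return field;                           // no doubling
    }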

Regarding my buf[14] declaration: I'm just using buf to hold the ASCII representation of the time set command, which should at most contain 12 characters plus the line feed \n and a null terminator \0; T S>46F5780F\n\0 is 14 chars. Each 16-bit half of the 32-bit timestamp in timeVal gets converted into 4 ASCII characters (4 bytes) representing its hex value, upper half first, then lower half. At the bottom I overwrite buf one last time to put all 14 bytes of the complete time set command into it.

A slightly modified version of the code:

    String cmd = "T S>"; // Beginning of the time set command
    // Now pack the date and time into the 32-bit timestamp.
    uint32_t timeVal = 0;
    uint32_t subval = now.year() - 1980; // Calculate the ALCAM year offset
    timeVal += (subval << 25);  // Left shift up to bits 31-25 of timeVal
    // Insert month
    subval = now.month();
    timeVal += (subval << 21);  // Left shift up to bits 24-21 of timeVal
    // Insert day
    subval = now.day();
    timeVal += (subval << 16);  // Left shift up to bits 20-16 of timeVal
    // Insert hours
    subval = now.hour();
    timeVal += (subval << 11);  // Left shift up to bits 15-11 of timeVal
    // Insert minutes
    subval = now.minute();
    timeVal += (subval << 5);   // Left shift up to bits 10-5 of timeVal
    // Insert seconds
    if (now.second() <= 31) {
        // If your seconds value is 31 or less, you can set the correct time
        subval = now.second();
    } else {
        // If your seconds value is 32-59, tough luck, the ALCAM time set code
        // can't handle it, because reasons...
        subval = 0;
    }
    timeVal += subval;          // Insert in lower bits 4-0 of timeVal
    // Declare a char array 'buf' that will ultimately hold the ASCII time set command:
    // 12 human-readable characters plus a line feed \n and a null terminator \0
    char buf[14];
    // Convert the upper 16 bits to a 4-character ASCII hex string
    // (cast to a 16-bit value so it matches the %X format on AVR)
    sprintf(buf, "%04X", (unsigned int)(timeVal >> 16));
    cmd = cmd + buf;    // Add to the cmd String that currently contains "T S>"
    // Then convert the lower 16 bits to a 4-digit hex string with leading 0s if needed
    sprintf(buf, "%04X", (unsigned int)(timeVal & 0xFFFF));
    cmd = cmd + buf;    // Add the lower 4 digits to the cmd String
    cmd = cmd + "\n";   // Add the line feed to the cmd String object
    cmd.toCharArray(buf, 14); // Convert String object 'cmd' to character array 'buf'
    // Send 'buf' off to the WriteCmd() function to set the ALCAM clock


@ Bmaf -

I knew you were thinking this way, but it is not correct.

If the value is 31, the clock will start from 31, 32, 33, 34… 59, 0, 1…
If the value is 32, it will start from 16, 17, 18, 19… 59, 0, 1…
The documentation doesn't say to multiply by 2 anywhere.

I agree that the documentation doesn’t say multiply by 2 anywhere. I’m just trying to wrap my head around what I view as an incredibly strange design decision to not implement any way to set seconds values greater than 31, since it could have been done with a minor loss of precision (1 second) by having the processor multiply the incoming values by 2. Does GHI disagree that removing the ability to set seconds values above 31 is a poor move? Did they not envision that a user might ever want to set the time during the latter half of a minute?

The whole reason this comes up is in relation to my other post lamenting the lack of any way to put the ALCAM board into a low-power sleep state. Instead I’ve been cutting power to it completely in between pictures, so that I’m only taking 2-4 pictures per minute and sleeping the Arduino the rest of the time. But cutting power completely erases the ALCAM’s clock, and if I want to set it during a reboot in the 2nd half of a minute, there is no way to set the seconds value correctly.
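For now, the only workaround I can see is to stall the clock-set until the RTC rolls into the first half of a minute. Something like this untested sketch, using the same RTClib calls as my code above:

    // Untested workaround sketch: delay the ALCAM clock-set until the RTC
    // seconds are in the 0-31 range that the ALCAM can actually represent.
    DateTime now = rtc.now();
    while (now.second() > 31) {
        delay(500);       // wait out the 32-59 second portion of the minute
        now = rtc.now();  // re-read the RTC
    }
    // now.second() is 0-31 here, so the encoded seconds field will be exact

The obvious downside is a worst-case stall of nearly 30 seconds after every power-up, which eats into the Arduino's sleep time.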

@ Bmaf - this is the standard encoding used in any FAT file system. It goes back over 30 years. Why is it like that? Ask Bill Gates :slight_smile:

Gus, I see what you mean about that being the standard FAT timestamp format (seconds divided by 2). But why, then, does the ALCAM not respect the fact that the stored seconds are meant to be multiplied by 2 on the way back out? The current ALCAM implementation seems unable to properly interpret any FAT timestamp representing a true seconds value greater than 31: it just takes the seconds/2 value that the standard calls for and keeps it as seconds/2. So it happens to give 1-second resolution when the true seconds are 0-31, but it cannot interpret the seconds properly when they are 32-59. If I wrote code that took a seconds value > 31 and divided it by 2 before encoding and sending it, the ALCAM would set its internal clock to the divided-by-2 value.

I guess what I'm trying to get across here is that this looks like a bug. I've tried to explain how the ALCAM is limited, since it cannot currently interpret any seconds value greater than 31 correctly. If the ALCAM processor abided by the FAT standard, it would multiply every timestamp seconds field by 2 to reconstruct the actual seconds value (or a value 1 second off of it) for its internal clock.

I'm not sure how ALCAM handles it myself, but the bit field holds the seconds divided by 2, meaning you can have any seconds value, but only even numbers. So 58 is possible, but to set it the bit field needs to be 29.

I will check with the team internally, as I am not sure how ALCAM handles this.

@ Gus - Your explanation makes sense; I just don't think the ALCAM is handling it as you and the FAT timestamp standard describe. I can happily send odd numbers from 1 to 29 in the encoded timestamp, and the ALCAM will interpret them straight across, not as the divided-by-2 values the FAT standard outlines.

T S>46F6769D
gets interpreted as
14:52:29
2015-07-22
!00

Obviously trying to send a seconds value larger than 31 screws up the minutes value. It would be great if the ALCAM stuck to the FAT standard, since I can certainly put up with a +/- 1 second discrepancy, but a 30 second deviation from real time is pushing things a bit.
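To spell out the overflow: the seconds field is only 5 bits wide, so a raw value like 45 decomposes as 45 = 32 + 13 = (1 << 5) + 13, and that 1 << 5 lands in the minutes field. A true time of 14:52:45 encoded that way would come back as 14:53:13.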

Then ALCAM is wrong and it needs a fix. We will look into this ASAP.