Forum Discussion
Altera_Forum
Honored Contributor
11 years ago

So looking at the code located in the jbistub.c file, I found this:
void calibrate_delay(void)
{
    int sample = 0;
    int count = 0;
    DWORD tick_count1 = 0L;
    DWORD tick_count2 = 0L;

    one_ms_delay = 0L;

#if PORT == WINDOWS || PORT == DOS
    for (sample = 0; sample < DELAY_SAMPLES; ++sample)
    {
        count = 0;
        tick_count1 = get_tick_count();
        while ((tick_count2 = get_tick_count()) == tick_count1) {}
        do { delay_loop(DELAY_CHECK_LOOPS); count++; }
        while ((tick_count1 = get_tick_count()) == tick_count2);
        one_ms_delay += ((DELAY_CHECK_LOOPS * (DWORD)count) /
                         (tick_count1 - tick_count2));
    }
    one_ms_delay /= DELAY_SAMPLES;
#else
    /* This is system-dependent! Update this number for target system */
    one_ms_delay = 1000L;
#endif
} /* end calibrate_delay() */

It looks like it's trying to count the number of delay-loop iterations that run between clock ticks over a certain time period, and from that calculate how many iterations it would take for one millisecond to elapse. However, I am not sure this is being done correctly. Another thing to note is that I am using a Win7 Pro 64-bit machine with a 4-core CPU. I have noticed that when running the Jam Player code, exactly 25% of the CPU is used (i.e. one core). I am not sure whether that is related to the issue, but I thought it worth mentioning.