fread seems to have an issue if the requested read is more than around 100 bytes

Using the normal C file API under C++11.

FILE* filep = fopen("filename", "r");

if (filep != NULL)
{
    char buffer[500];
    size_t size;
    struct stat statBuff;
    int status = stat("filename", &statBuff);
    if (status == 0)
    {
        int numRead = 0;
        numRead = fread(buffer, 500, 1, filep);
    }
    fclose(filep);
    //other code.
}

This crashes during the fread call. If I reduce the size to 100 bytes it mostly passes, but if I reduce it to one byte and put it in a for loop to read one byte over and over until it reaches size, it works just fine and I get all my data. In this case, my file is only 374 bytes, so I know I’m not running off the end of the buffer.
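For reference, the byte-at-a-time workaround looks roughly like this; a minimal sketch using the same file name and buffer size as above, with error handling trimmed:

#include <cstdio>
#include <sys/stat.h>

// Minimal sketch of the byte-at-a-time workaround described above.
// Assumes the size reported by stat() fits in the 500-byte buffer.
FILE* filep = fopen("filename", "r");
if (filep != NULL)
{
    struct stat statBuff;
    if (stat("filename", &statBuff) == 0)
    {
        char buffer[500];
        size_t total = 0;
        // Read one byte per fread() call until the stat() size is reached.
        while (total < (size_t)statBuff.st_size && total < sizeof(buffer))
        {
            if (fread(buffer + total, 1, 1, filep) != 1)
            {
                break;  // EOF or read error
            }
            ++total;
        }
    }
    fclose(filep);
}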

Has anyone else seen this? This is with Legato 7.11 (R7) with a WP76xx system.

Is your application sandboxed?

Could you please show your .adef file?

Please check the application definition settings maxFileSystemBytes and maxMemoryBytes in the documentation.
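For example, a sketch of those limits in an .adef; the values here are only illustrative, and the exact placement of maxMemoryBytes should be checked against the .adef documentation:

maxFileSystemBytes: 512K    // app-level limit on the sandbox temporary file system

processes:
{
    maxMemoryBytes: 40000K  // limit on the memory the app's processes may use
}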

Best regards,
Sylvain

Hi @spastor,
No, it’s not sandboxed, as there are things I can’t do in sandboxed mode. It doesn’t appear to be a memory issue, as I can either new the buffer or declare it as a stack variable to hold the bytes read, and as long as I fread one byte at a time, all is well and I get my data.

Here is the .adef, just standard stuff except that it’s not sandboxed.

sandboxed: false

executables:
{
    TestProgram = ( TestProgramPP )
}

bindings:
{
    TestProgram.TestProgramPP.le_mdc -> <root>.le_mdc
    TestProgram.TestProgramPP.le_data -> <root>.le_data
    TestProgram.TestProgramPP.le_info -> <root>.le_info
    TestProgram.TestProgramPP.le_mrc -> <root>.le_mrc
    TestProgram.TestProgramPP.le_sim -> <root>.le_sim
}

processes:
{
    envVars:
    {
        LE_LOG_LEVEL = DEBUG
    }

    run:
    {
        ( TestProgram )
    }

    maxCoreDumpFileBytes: 512K
    maxFileBytes: 512K
}

version: 1.0.0
maxFileSystemBytes: 512K

I tried to read more than 100 bytes using fread with release 7 and did not have any issues. Can you post the log if you are still facing this issue?

This was very early in the release process for the WP76xx, probably R6. I found that the C++ ifstream works, so I’ve moved on.
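For anyone who lands here later, the ifstream approach looks roughly like this; a minimal sketch where the file name is just a placeholder:

#include <fstream>
#include <iterator>
#include <vector>

// Read the whole file in one go with std::ifstream instead of fread().
std::ifstream in("filename", std::ios::binary);
if (in)
{
    std::vector<char> data((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    // data.size() is the total number of bytes read (374 for the file above).
}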