Core dump when running script to get a file

No, because this is a public forum and the script contains access keys, so for security reasons I can't share it as-is. But you can reproduce it by creating an AWS S3 resource and putting a 6 MB file there.
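If you want to set up the same test, something like the following with the AWS CLI should work (just a sketch; the bucket and file names here are placeholders I made up):

# Create a 6 MB file of random data and upload it to a test bucket.
dd if=/dev/urandom of=testfile.tar.gz bs=1M count=6
aws s3 mb s3://your-test-bucket
aws s3 cp testfile.tar.gz s3://your-test-bucket/testfile.tar.gz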

Here is the template script:
#!/bin/sh
# Fill in the placeholders below with your own paths, bucket, and credentials.
outputFile="Your_PATH"
amzFile="AMAZON_FILE_PATH"
bucket="YOUR_BUCKET"
resource="/${bucket}/${amzFile}"
contentType="application/x-compressed-tar"
dateValue=$(date -R)
# String to sign for AWS Signature Version 2.
stringToSign="GET\n\n${contentType}\n${dateValue}\n${resource}"
s3Key="YOUR_S3_KEY"
s3Secret="YOUR_S3SECRET"
# HMAC-SHA1 the string with the secret, then base64-encode the digest.
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)

curl -H "Host: ${bucket}.s3.amazonaws.com" \
     -H "Date: ${dateValue}" \
     -H "Content-Type: ${contentType}" \
     -H "Authorization: AWS ${s3Key}:${signature}" \
     "https://${bucket}.s3.amazonaws.com/${amzFile}" -o "${outputFile}"
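With the placeholders filled in, I run it on the device like this (getS3File.sh is just the name I'm using here for illustration):

chmod +x getS3File.sh
./getS3File.sh   # fetches the file from S3 into outputFile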

You will need to compile base64 from GNU coreutils, as it isn't included in the image. I used v8.25 since that matches the coreutils version used elsewhere in the Yocto image.
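In case it helps, the build goes roughly like this (a sketch, assuming a typical cross-compile setup; the --host triplet and target path are placeholders for whatever your toolchain provides):

# Fetch and unpack coreutils 8.25 to match the Yocto image.
wget https://ftp.gnu.org/gnu/coreutils/coreutils-8.25.tar.xz
tar xf coreutils-8.25.tar.xz
cd coreutils-8.25
# Configure for the target; replace the triplet with your toolchain's.
./configure --host=arm-poky-linux-gnueabi
# Build just the base64 program and copy it onto the device.
make src/base64
scp src/base64 root@device:/usr/bin/base64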

I even went so far as to create a service which would start the binary created in the last exercise at boot (after everything), wait for an internet connection, then go get the file. However, I can’t seem to get it to automatically start for some reason. I can run it from the command line with /etc/init.d/scriptname.sh start, and that works. So I thought that I could run it from the Legato app that way, since I’m not actually starting it. However, I get the same issue, although the cutoff point seems to be slightly higher, but not much.