I’m working on code that writes to a log file locally inside a container. I also need to push that file to an S3 bucket, so that our (internal) frontend can tail the log-file contents.
I know for a fact that AWS S3 doesn’t support appending to an object; it can only store whole objects (or multiple versions of the same key), so I’m trying to work out the right approach here.
Before this requirement, our code was simple:
- Setup code:

```go
// a container-local directory
logFilePath := filepath.Join(opts.ScratchDir.LogDir(), "combined.log")

// using the afero file-system
logFile, err := env.Fs.OpenFile(logFilePath, os.O_CREATE|os.O_TRUNC|os.O_RDWR, 0o644)
if err != nil {
	fmt.Fprintf(os.Stderr, "Failed to set up log file: %+v\n", err)
	os.Exit(1)
}

// tracer is a wrapper around our command stdout
tracer.SetVerbosity(opts.Verbosity)
tracer.Tee(logFile)
```
- Our long-running command. The `tracer` framework makes sure the stdout of `build.Run` is sent to `combined.log`:

```go
// Run the build, hold onto the error
buildError := tracer.Trace("build", "Run", func() error {
	return build.Run(&opts.Config, &env)
})
```
- Finish and upload. At the end of this step, `combined.log` is available on the container-local volume and can be uploaded to S3 using `PutObject`:

```go
// Close log file
tracer.UnTee()
err = logFile.Close()
if err != nil {
	env.Tracer.Logf("Failed to close log file before copying to output: %s", err)
}
```
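That final upload is a single whole-object call. A minimal sketch of what it looks like with the AWS SDK for Go v2 (the `uploadLog` name, bucket, and key are hypothetical placeholders, not our real code):

```go
import (
	"context"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadLog pushes the entire log file to S3 in one PutObject call.
func uploadLog(ctx context.Context, client *s3.Client, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String("my-bucket"),         // hypothetical bucket
		Key:    aws.String("logs/combined.log"), // hypothetical key
		Body:   f,
	})
	return err
}
```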
Requirement
Instead of having the file `combined.log` uploaded only once at the end, I would now like to upload it to S3 as it grows (e.g. every 30s/1m), so that we can create a pre-signed URL for it and tail it on our frontend.
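For the pre-signed URL itself, a minimal sketch with the SDK's presigner (again assuming the hypothetical bucket/key above and a 15-minute expiry):

```go
import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// presignLogURL returns a time-limited GET URL the frontend can poll.
func presignLogURL(ctx context.Context, client *s3.Client) (string, error) {
	presigner := s3.NewPresignClient(client)
	req, err := presigner.PresignGetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),         // hypothetical bucket
		Key:    aws.String("logs/combined.log"), // hypothetical key
	}, s3.WithPresignExpires(15*time.Minute))
	if err != nil {
		return "", err
	}
	return req.URL, nil
}
```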
Approaches tried
- Ran a goroutine with a 30s ticker to upload the object to S3, but since the file I/O won’t be complete (my understanding) before `Close()`, the contents are not present in S3 (a sketch of this approach follows the list).
- Tried tailing packages like https://github.com/nxadm/tail, which emit new lines (`tail -f` style), but `PutObject` can’t keep appending to S3 for each new line found.
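For reference, the ticker goroutine from the first bullet looked roughly like this (a sketch only, assuming the AWS SDK for Go v2; the `tickUpload` name, bucket, and key are placeholders):

```go
import (
	"context"
	"os"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// tickUpload re-uploads the whole file on every tick until ctx is done.
// PutObject always replaces the full object, so each tick pushes a
// complete snapshot rather than an append.
func tickUpload(ctx context.Context, client *s3.Client, path string) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			f, err := os.Open(path)
			if err != nil {
				continue // file not ready yet; retry next tick
			}
			_, err = client.PutObject(ctx, &s3.PutObjectInput{
				Bucket: aws.String("my-bucket"),         // hypothetical bucket
				Key:    aws.String("logs/combined.log"), // hypothetical key
				Body:   f,
			})
			f.Close()
			if err != nil {
				// log and retry on the next tick
			}
		}
	}
}
```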