ocean
In-memory key-value store that saves your data to disk using JSON.
Installation
go get git.akyoto.dev/go/ocean
Example
// Define the User type
type User struct { Name string }
// Create a collection in ~/.ocean/myapp/User.dat
users := ocean.New[User]("myapp", &storage.File[User]{})
// Store some data
users.Set("1", &User{Name: "User 1"})
users.Set("2", &User{Name: "User 2"})
users.Set("3", &User{Name: "User 3"})
// Read from memory
first, err := users.Get("1")
// Iterate over all users
for user := range users.All() {
fmt.Println(user.Name)
}
File format
1
{"name":"User 1"}
2
{"name":"User 2"}
3
{"name":"User 3"}
Benchmarks
BenchmarkGet-12 275126157 4.462 ns/op 0 B/op 0 allocs/op
BenchmarkSet-12 4796011 251.0 ns/op 32 B/op 2 allocs/op
BenchmarkDelete-12 471913158 2.530 ns/op 0 B/op 0 allocs/op
BenchmarkNew-12 48838576 22.89 ns/op 80 B/op 1 allocs/op
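These numbers come from the included benchmarks and can typically be reproduced from the repository root with Go's standard benchmark tooling:

go test -bench . -benchmem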
Storage systems
nil
You can specify nil as the storage system, which will keep data in RAM only.
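For example, reusing the User type and constructor from the example above:

// nil storage: data is kept in RAM only and is not persisted to disk
users := ocean.New[User]("myapp", nil)
users.Set("1", &User{Name: "User 1"})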
storage.File
storage.File uses a single file to store all records.
Writes using Set(key, value) are asynchronous: they only mark the collection as "dirty", which is very quick. The actual sync to disk happens shortly afterwards. Every collection uses one goroutine that checks the "dirty" flag, writes the new contents to disk and resets the flag.
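As a rough sketch of that dirty-flag pattern (not the actual implementation; it assumes the standard sync/atomic and time packages, and fileStorage, markDirty, syncLoop and the 100 ms interval are placeholders):

// markDirty is what a Set call would do: flipping an atomic flag is cheap.
type fileStorage[T any] struct {
	dirty atomic.Bool
}

func (f *fileStorage[T]) markDirty() {
	f.dirty.Store(true)
}

// syncLoop runs in its own goroutine per collection: it checks the flag,
// writes the current contents to disk and resets the flag.
func (f *fileStorage[T]) syncLoop(writeToDisk func() error) {
	for {
		time.Sleep(100 * time.Millisecond)
		if f.dirty.CompareAndSwap(true, false) {
			_ = writeToDisk() // error handling omitted in this sketch
		}
	}
}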
The biggest advantage of storage.File is that it scales well with the number of requests: suppose n is the number of write requests and io is the time one write takes. Immediate storage would require O(n * io) time to complete all writes, but the async behavior makes it O(n).
You should use storage.File if you have a permanently running process such as a web server, where end users expect quick responses and background work can happen after the request has already been dealt with. Make sure you defer collection.Sync() to ensure that queued writes are handled when the process ends.
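For example, in a permanently running process:

func main() {
	users := ocean.New[User]("myapp", &storage.File[User]{})

	// Flush any queued writes before the process exits
	defer users.Sync()

	// ... start your web server and handle requests ...
}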