
Best practice to implement count or likes on Firestore?

$45
Single winner
Asked a year ago
Viewed 0 times

I know Firestore has a 1-write-per-second limit per document, and I've read the documentation on distributed counters.

But a distributed counter means more reads, and more reads lead to more cost.

Is there a workaround here?


There's no single best practice, only trade-offs that depend on your case.

It sounds like you're building something like a blog, where read cost is the main concern.

Here's my solution, which trades accuracy for cost efficiency.

Read through the official doc you mentioned in the question, and then:

1. On the client, increment a random shard on each like, routing most writes to a regular "shards" subcollection and a small fraction to a "shards-trigger-summing" subcollection.

// client.js
const blogId = "myBlogId"; // ID of the blog post being liked

if (Math.random() < 0.9) {
  // ~90% of likes go to a regular shard (no function trigger)
  db.collection("blogs")
    .doc(blogId)
    .collection("shards")
    .doc(Math.floor(Math.random() * 5).toString())
    .set({ count: firebase.firestore.FieldValue.increment(1) }, { merge: true });
} else {
  // ~10% of likes go to a shard that also triggers the summing function
  db.collection("blogs")
    .doc(blogId)
    .collection("shards-trigger-summing")
    .doc(Math.floor(Math.random() * 5).toString())
    .set({ count: firebase.firestore.FieldValue.increment(1) }, { merge: true });
}
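The routing above can be factored into a small pure helper. This is a hypothetical sketch (pickShardPath is my name, not a Firestore API); routeRand picks the collection and shardRand picks the shard, so the decision is deterministic and testable:

```javascript
// Hypothetical helper: compute which shard document path a like write targets.
// p is the fraction of writes routed to the function-triggering collection.
function pickShardPath(blogId, routeRand, shardRand, p = 0.1, numShards = 5) {
  const collection = routeRand < 1 - p ? "shards" : "shards-trigger-summing";
  const shardId = Math.floor(shardRand * numShards).toString();
  return "blogs/" + blogId + "/" + collection + "/" + shardId;
}
```

With p = 0.1, roughly one like in ten lands in "shards-trigger-summing" and re-runs the summing function, so the cached total lags by only a handful of likes.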

2. Create a Cloud Function, let's say "onSumTriggeringCountWrited", that sums all shards whenever a triggering shard is written.

// cloudfunction.ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const onSumTriggeringCountWrited = functions.firestore
  .document("blogs/{blogId}/shards-trigger-summing/{shardId}")
  .onWrite(async (_snap, context) => {
    const blogId = context.params.blogId;
    // Read both shard subcollections so no shard is missed in the sum.
    const [regular, triggering] = await Promise.all([
      db.collection("blogs").doc(blogId).collection("shards").get(),
      db.collection("blogs").doc(blogId).collection("shards-trigger-summing").get(),
    ]);
    const sum = [...regular.docs, ...triggering.docs]
      .map((doc) => doc.data().count ?? 0)
      .reduce((prev, curr) => prev + curr, 0);
    // Merge so other fields on the blog document survive.
    return db.collection("blogs").doc(blogId).set({ count: sum }, { merge: true });
  });
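The aggregation itself is just a sum over the shards' count fields. As a sketch, that pure step can be isolated (sumShards is a name I'm introducing for illustration):

```javascript
// Sum the "count" field across shard document data objects,
// treating a missing count as zero.
function sumShards(shardDatas) {
  return shardDatas
    .map((shard) => shard.count ?? 0)
    .reduce((prev, curr) => prev + curr, 0);
}
```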

3. Now you get the total count from the blog document with a single read, without reading through all the shards every time.

It's not perfectly accurate, but it's good enough in most cases.

But you can always do something a little fancier, like: "if the count on the blog document is greater than 10, trust it; otherwise, traverse all the shards to get the accurate count."
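That fallback could be sketched like this. It's a hypothetical outline: readBlogCount and readAllShardCounts stand in for the one-document read and the full shard scan, injected as callbacks so the decision logic stays testable outside Firestore:

```javascript
// Trust the cached count when it's large enough that a small lag doesn't
// matter; otherwise pay the extra reads for an exact sum.
async function getLikeCount(readBlogCount, readAllShardCounts, threshold = 10) {
  const cached = await readBlogCount(); // one document read
  if (cached > threshold) return cached; // possibly a few likes stale
  const shardCounts = await readAllShardCounts(); // N shard reads, exact
  return shardCounts.reduce((prev, curr) => prev + curr, 0);
}
```

In a real app the two callbacks would wrap `db.collection("blogs").doc(blogId).get()` and the two shard-subcollection queries.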

  • Taken