Hacker News

A 40-line fix eliminated a 400x performance gap (https://questdb.com)

368 points by bluestreak 5 days ago | 78 comments

ot 5 days ago |

You can go even faster, to about 8ns (almost an additional 10x improvement), by using software perf events: PERF_COUNT_SW_TASK_CLOCK is thread CPU time, and it can be read through a shared page (so no syscall; see perf_event_mmap_page); you then add the delta since the last context switch with a single rdtsc call inside a seqlock.

This is not well documented unfortunately, and I'm not aware of open-source implementations of this.

EDIT: Or maybe not; I'm not sure whether PERF_COUNT_SW_TASK_CLOCK lets you select only user time. The kernel can definitely do it, but I don't know if the wiring is there. However, this definitely works for overall thread CPU time.

shermantanktop 5 days ago |

Flamegraphs are wonderful.

Me: looks at my code. "sure, ok, looks alright."

Me: looks at the resulting flamegraph. "what the hell is this?!?!?"

I've found all kinds of crazy stuff in codebases this way. Static initializers that aren't static, one-line logger calls that trigger expensive serialization, heavy string-parsing calls that don't memoize patterns, etc. Unfortunately some of those are my fault.

jerrinot 5 days ago |

Author here. After my last post about kernel bugs, I spent some time looking at how the JVM reports its own thread activity. It turns out that "What is the CPU time of this thread?" is/was a much more expensive question than it should be.

jonasn 5 days ago |

Author of the OpenJDK patch here.

Thanks for the write-up, Jaromir :) For those interested, I explored the memory overhead of reading /proc, including eBPF profiling and the history behind the poorly documented user-space ABI.

Full details in my write-up: https://norlinder.nu/posts/User-CPU-Time-JVM/

furyofantares 5 days ago |

> Flame graph image

> Click to zoom, open in a new tab for interactivity

I admit I did not expect "Open Image in New Tab" to do what it said on the tin. I guess I was aware that it was possible with SVG but I don't think I've ever seen it done and was really not expecting it.

pjmlp 5 days ago |

Which goes to show that writing in C, C++, or whatever systems language isn't automatically blazing fast; it depends on what is being done.

Very interesting read.

higherhalf 5 days ago |

clock_gettime() goes through the vDSO, avoiding a kernel transition. It shows up on the flamegraph as well.

goodroot 5 days ago |

The QuestDB team are among the best doing it.

Love the people and their software.

Great blog Jaromir!

burnt-resistor 5 days ago |

I really wish™ there were an API/ABI for userland- and kernelland-defined individual virtual files at arbitrary locations, backed by processes and kernel modules respectively. I've tried pipes, overlays, and FUSE to no avail. It would greatly simplify configuration management implementations while maintaining compatibility with the convention of plain text files, and there's often no need to have an actual file on any media or the expense of IOPS.

While I don't particularly like the IO overhead and churn consequences of real files for performance metrics, I get the 9p-like appeal of treating the virtual fs as a DBMS/API/ABI.

otterley 5 days ago |

It took seven years to address this concern after the initial bug report (2018). That seems like a long time, considering that reading CPU time can be in the hot path for profiled code.

Ono-Sendai 5 days ago |

"look, I'm sorry, but the rule is simple: if you made something 2x faster, you might have done something smart. if you made something 100x faster, you definitely just stopped doing something stupid"

https://x.com/rygorous/status/1271296834439282690

ee99ee 5 days ago |

This is such a great writeup

squirrellous 5 days ago |

Does anyone knowledgeable know whether it’s possible to drastically reduce the overhead of reading from procfs? IIUC everything in it is in-memory, so there’s no real reason reading some data should take on the order of 10µs.

mgaunard 5 days ago |

Obviously a vDSO read is going to be significantly faster than a syscall that switches to the kernel, writes serialized data to a buffer, switches back to userland, and then has that data parsed.

xthe 5 days ago |

This is a great example of how a small change in the right place can outweigh years of incremental tuning.

amelius 5 days ago |

It's kinda crazy the amount of plumbing required to get a few bits across the CPU.

tomiezhang 5 days ago |

cool