I do all of my software development either on Linux or macOS, and usually both at the same time. Years ago, Apple eliminated gcc and g++ from their toolchain and switched to clang. Clang is available on all *nix platforms, but I'm not sure whether it's compatible with cross-compiling to STM binaries. The nicest thing about clang is its more human-readable warnings and errors. I don't know if it offers profiling, though, and can't say anything about its execution efficiency relative to gcc/g++. Still, since clang is Apple's default compiler, I decided to use it under Linux as well to keep my makefiles consistent across both platforms.

I also use valgrind on Linux to check for memory leaks. Hans, you've said you don't use dynamic memory allocation, so that's not an issue. Of course, you still need to worry about stack overflow.

Floating point division is a CPU hog, considerably more so than floating point multiplication. It's a necessary evil for statistics calculations such as averages, variance, and standard deviation. Like Hans, I'll take every opportunity to avoid division and implement it as multiplication where possible; a quick sketch of the idea is below.

One nice feature of FPGA development is the ability to trade off DSP block utilization against execution speed: I can add more DSP blocks for faster execution, or accept slower execution to save DSP blocks. It can be tricky determining the best course of action in CPU environments where you don't have control over the hardware. But it certainly inspires innovation and a deep understanding of what works and what doesn't.
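To make the multiply-instead-of-divide idea concrete, here is a minimal C sketch (my own illustration, not code from this thread). It assumes a sample count N that is fixed at compile time, so the reciprocal 1/N folds to a constant and the mean and population variance need no run-time division. The names N, INV_N, and mean_and_variance are hypothetical.

    #include <stddef.h>
    #include <stdio.h>

    #define N      128                 /* hypothetical sample count */
    #define INV_N  (1.0f / (float)N)   /* folded to a constant by the compiler */

    /* Mean and population variance with no run-time division. */
    static float mean_and_variance(const float x[], float *variance)
    {
        float sum    = 0.0f;
        float sum_sq = 0.0f;

        for (size_t i = 0; i < N; i++) {
            sum    += x[i];
            sum_sq += x[i] * x[i];
        }

        float mean = sum * INV_N;                  /* multiply, don't divide   */
        *variance = sum_sq * INV_N - mean * mean;  /* E[x^2] - (E[x])^2        */
        return mean;
    }

    int main(void)
    {
        float samples[N];
        for (size_t i = 0; i < N; i++)
            samples[i] = (float)i;                 /* dummy data for the demo  */

        float var;
        float avg = mean_and_variance(samples, &var);
        printf("mean = %f, variance = %f\n", avg, var);
        return 0;
    }

When the sample count is only known at run time, the same trick still pays off: compute 1.0f/n once and reuse it as a multiplier for each statistic, so you pay for a single division instead of one per result.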
Tony AC9QY

On Wed, Apr 16, 2025 at 6:52 AM Chris, G5CTH via <chris.rowland=[email protected]> wrote: