Zephyr
Zephyr RTOS :: Device Tree :: Kconfig :: cmake
Overview
This NN page is a holding point for notes and links to external references regarding Zephyr RTOS and the Zephyr Project. Key features of Zephyr RTOS include its designers' incorporation of the Device Tree Source and Kconfig frameworks into this real-time operating system.
As of April 2023 this page is under reorganization, pulling multiple local pages into this page and removing older and defunct content - TMH
For device tree and Kconfig see also:
- https://www.kernel.org/doc/html/latest/kbuild/kconfig-language.html
- https://github.com/ulfalizer/Kconfiglib Kconfig library parser and helper scripts project headed by Ulf Magnusson
TODO:
- [ ] Find the meaning of the wrapping function this Zephyr header provides `$workspace/zephyr/include/zephyr/dt-bindings/dt-util.h`.
- [ ] Review the definition and implications of `__ASSERT` symbol defined in `$workspace/zephyr/include/zephyr/sys/__assert.h`.
Contents
- 1 ^ Zephyr Releases
- 2 ^ Zephyr Tutorials
- 3 ^ Zephyr Toolchain Installation
- 4 ^ Zephyr Build Process
- 5 ----- ----- ----- ----- ----- ----- ----- ----- -----
- 6 ^ Zephyr Internals
- 7 ^ Zephyr Macros
- 8 ----- ----- ----- ----- ----- ----- ----- ----- -----
- 9 ^ Interrupts
- 10 ^ Zephyr Threads
- 11 ^ Zephyr Hardware Abstraction Layer
- 12 ^ Zephyr Drivers
- 13 ^ Zephyr Work Queues
- 14 ^ Zephyr RTOS Logger and Backend Features
- 15 ^ Zephyr thread_analyzer_cb And Related
- 16 ^ Zephyr Shell
- 17 ----- ----- ----- ----- ----- ----- ----- ----- -----
- 18 ^ Kernel Services
- 19 Zephyr Memory Management
- 20 ^ Zephyr Kernel Timing
- 21 ^ Zephyr Application Code Relocation
^ Zephyr Releases
The Zephyr RTOS repository, one of some one hundred twenty projects on the Zephyr Project's GitHub page, has stable releases which become available once or twice a year. The repository itself is generally at the cutting edge of development. Work is git-tagged or identified as a release candidate until all open issues and developments have been reviewed, at which point a new stable release is created and tagged.
Nordic Semi forks a recent, but not necessarily the most recent, stable release of Zephyr RTOS as part of its nRF Connect SDK (NCS).
A local NN wiki page with some early Zephyr release notes is Zephyr RTOS releases. There may not be much to save from that article, but contributor Ted plans to review it.
As of 2023 Q2 the Zephyr RTOS stable release is v3.3.0.
^ Zephyr Tutorials
Troubleshooting IoT Cellular Connections with Nordic nRF9160 . . .
Mr. Green's Workshop tutorials on Zephyr and the RPi Pico. This article describes three ways of classifying Zephyr-based projects based on their top directory location relative to the instance of the Zephyr SDK against which they build.
^ Zephyr Toolchain Installation
Complete notes for Zephyr toolchain installation and config are provided at Zephyr Project's "Getting Started Guide".
^ Zephyr Build Process
It is possible to use Kconfig to pass along compiler options, for example via `CONFIG_COMPILER_OPT`:
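For example, a `prj.conf` fragment passing extra flags to the compiler (the flags shown are illustrative, not a recommendation):

```
# prj.conf -- extra compiler flags appended to the build (illustrative values)
CONFIG_COMPILER_OPT="-Wall -Wextra"
```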
----- ----- ----- ----- ----- ----- ----- ----- -----
^ Zephyr Internals
The Zephyr RTOS run-time starting point, as of Zephyr v3.4.0, is the function z_cstart() in source file init.c.
Zephyr util.h header file:
- ${ZEPHYR_WORKSPACE}/zephyr/include/zephyr/sys/util.h:43:#define POINTER_TO_UINT(x) ((uintptr_t) (x))
Error codes enumerated in errno.h:
- zephyr/lib/libc/minimal/include/errno.h
^ Zephyr Macros
TODO 2024-07-31: add links and notes on Zephyr RTOS util.h header file, a collection of macros and helper functions for common tasks.
Zephyr RTOS employs many macros throughout its code base. As a starting point, Zephyr's atomic services use macros including ATOMIC_BITMAP_SIZE. This macro is defined in file . . . atomic.h.
Excerpt from atomic.h:
/**
 * @brief This macro computes the number of atomic variables necessary to
 * represent a bitmap with @a num_bits.
 *
 * @param num_bits Number of bits.
 */
#define ATOMIC_BITMAP_SIZE(num_bits) (ROUND_UP(num_bits, ATOMIC_BITS) / ATOMIC_BITS)
----- ----- ----- ----- ----- ----- ----- ----- -----
^ Interrupts
Zephyr RTOS provides some API and code facilities to support hardware interrupts, but each processor architecture has its own ways of implementing interrupts. This local page section covers issues and solutions in Zephyr application design with interrupts, and how to code for a given target processor.
First references:
- https://docs.zephyrproject.org/3.3.0/kernel/services/interrupts.html#interrupts
- https://docs.zephyrproject.org/3.3.0/kernel/services/interrupts.html#c.k_is_in_isr
Zephyr documentation on architecture porting, with references to lower-level interrupt configuration code:
Zephyr's API macro for connecting a routine to a hardware interrupt is defined in `irq.h`:
/**
 * @brief Initialize an interrupt handler.
 *
 * This routine initializes an interrupt handler for an IRQ. The IRQ must be
 * subsequently enabled before the interrupt handler begins servicing
 * interrupts.
 *
 * @warning
 * Although this routine is invoked at run-time, all of its arguments must be
 * computable by the compiler at build time.
 *
 * @param irq_p IRQ line number.
 * @param priority_p Interrupt priority.
 * @param isr_p Address of interrupt service routine.
 * @param isr_param_p Parameter passed to interrupt service routine.
 * @param flags_p Architecture-specific IRQ configuration flags.
 */
#define IRQ_CONNECT(irq_p, priority_p, isr_p, isr_param_p, flags_p) \
	ARCH_IRQ_CONNECT(irq_p, priority_p, isr_p, isr_param_p, flags_p)
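A minimal sketch of connecting an ISR with this macro, assuming a hypothetical IRQ line number and priority (real values come from the SoC documentation or device tree). This compiles only within a Zephyr application build:

```c
#include <zephyr/kernel.h>
#include <zephyr/irq.h>

#define MY_DEV_IRQ      24   /* hypothetical IRQ line number */
#define MY_DEV_IRQ_PRIO 2    /* hypothetical priority        */

static void my_isr(const void *arg)
{
	ARG_UNUSED(arg);
	/* ...acknowledge the device and do minimal work here... */
}

static void my_dev_init(void)
{
	/* All IRQ_CONNECT() arguments must be build-time constants */
	IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_IRQ_PRIO, my_isr, NULL, 0);
	irq_enable(MY_DEV_IRQ);
}
```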
^ Zephyr Threads
An important executing element of a Zephyr-based app is an application-defined thread. Zephyr-based applications written in C will normally have at least one app-side thread, which Zephyr by default names `main`. This name reflects that application code begins executing from a function named `main()`, a long-standing convention of the C language.
Zephyr-based applications can define threads of their own. This is a multi-step action in the code, both at development time and at run time. Application-defined threads are given numeric identifiers by Zephyr at run time; only the "main line" code of the app gets a named thread by default. Developers however may optionally call a Zephyr API to give additional threads human-meaningful names.
^ Anatomy of a Zephyr thread
Creating a Zephyr thread in an app involves declarations of certain thread elements, a call to Zephyr's `k_thread_create()` API function, and at minimum an entry point function for the given thread. The pieces are:
- thread data structure
- thread priority
- thread stack size
- call to Zephyr "create thread" API
- thread entry point function
In simple terms, creating a Zephyr application thread involves these steps:
(1) declare the thread's data structure of type `struct k_thread`
(2) #define the thread's priority
(3) #define the thread's static memory stack size in bytes
(4) call Zephyr's thread creation API function with its required ten parameters
(5) define an entry point function for the thread, which the kernel invokes when the thread is scheduled to run
A bit of repetition: a Zephyr thread's data structure, priority, and stack size are programmed in a declarative way. Stack size and thread data are typically declared as follows; the first of these uses a function-like macro:
K_THREAD_STACK_DEFINE(thread_led_stack_area, THREAD_LED_STACK_SIZE);
struct k_thread thread_led_data;
Interestingly, the stack area identifier, the first parameter to that macro, is defined by the macro itself and then typically referenced only once or twice, in the call to Zephyr's `k_thread_create()` API.
The call to `k_thread_create()` looks like this:
int initialize_thread_led(void)
{
    int rstatus = 0;

    k_tid_t task_led_tid = k_thread_create(&thread_led_data, thread_led_stack_area,
                                           K_THREAD_STACK_SIZEOF(thread_led_stack_area),
                                           thread_led_entry_point,
                                           NULL, NULL, NULL,
                                           THREAD_LED_PRIORITY, 0, K_MSEC(1000)); // K_NO_WAIT);

    // REF https://docs.zephyrproject.org/2.6.0/reference/kernel/threads/index.html?highlight=k_thread_create#c.k_thread_name_set
    // int k_thread_name_set(k_tid_t thread, const char *str)
    rstatus = k_thread_name_set(task_led_tid, MODULE_ID__THREAD_LED);
    if (rstatus == 0) { } // avoid compiler warning about unused variable - TMH

    return (int)task_led_tid;
}
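For completeness, a hedged sketch of what the `thread_led_entry_point` function referenced above might look like; the body and sleep interval are assumptions, and the three void pointers match the p1/p2/p3 arguments of `k_thread_create()`:

```c
/* Hypothetical entry point for the LED thread created above */
static void thread_led_entry_point(void *p1, void *p2, void *p3)
{
	ARG_UNUSED(p1);
	ARG_UNUSED(p2);
	ARG_UNUSED(p3);

	while (1) {
		/* ...toggle an LED or do other periodic work... */
		k_msleep(500);  /* yield the CPU between iterations */
	}
}
```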
^ Thread API references
^ Zephyr Hardware Abstraction Layer
The Zephyr RTOS 3.4.0 project as of 2024 Q1 supports several MCU architectures plus an array of development boards. Part of the framework for this support is a hardware abstraction layer, many of whose files reside alongside the Zephyr RTOS source tree in `$workspace/modules`. The path `$workspace/modules/hal` contains vendor-provided hardware libraries.
An example of a Zephyr HAL is `$workspace/modules/hal/nxp`. This said, for Zephyr's supported in-tree drivers there are sources in the Zephyr source tree itself that inform Zephyr about certain details of vendor HALs.
An example of source tree HAL references is the set of device tree source bindings files which live in `$workspace/zephyr/dts/bindings`. As a case in point details for NXP's LPC family MCU pin and peripheral configurations are expressed in the file:
$workspace/zephyr/dts/bindings/pinctrl/nxp,lpc-iocon-pinctrl.yaml
^ Zephyr Drivers
Zephyr RTOS has a notion of in-tree drivers and out-of-tree drivers. The following local page sections begin to cover these topics with a few references to specific peripheral drivers.
^ In-tree drivers
Early in contributor Ted's learning and use of Zephyr RTOS, the "in tree" and "out of tree" driver concepts seemed to refer to code bases located completely within, and completely outside, the Zephyr RTOS source tree, respectively. This is not a correct view. Rather, Zephyr's "in tree" drivers have code in the Zephyr source tree which nearly always calls third-party MCU library code bases.
Out-of-tree drivers in contrast have their code bases entirely outside of the Zephyr RTOS source tree.
Zephyr RTOS code provides standardized structures to support peripherals and devices. These structures nearly always involve function pointers which ultimately point to a given target MCU or MCU family's driver library or Software Development Kit (SDK) code base.
The Zephyr RTOS code base makes heavy use of function pointers and of macros. These pointers and macros can make it a challenge to locate a function definition during static code analysis. The `ctags` utility does not always have a clear chain of symbols to follow to get to the ultimate definition of a function.
This NN article section on Zephyr in-tree drivers touches on this matter. The subsection below on SPI NOR FLASH illustrates the challenge, and looks at one way of following list files from build artifacts to get closer to driver function definitions.
^ ADC
Zephyr RTOS' in-tree Analog to Digital Converter (ADC) driver documentation point:
^ UART
Zephyr in-tree UART driver supports polled, interrupt driven, and DMA based UART configurations. First link to section of Zephyr 3.3.0 documentation on UART in-tree driver:
We are curious to see what support NXP offers in Zephyr 3.3.0 for UART DMA configuration. Here are a couple of blog posts which may shed some light on the needed code, and which code we need to look for:
On the topic of Zephyr UART via DMA support:
-
A Nordic ncs sample application, likely good for any architecture independent Kconfig symbols:
Some mention of supported boards and dts overlay files in this testing yaml file:
^ SPI
One specific SPI in-tree driver in Zephyr 3.4.0 is spi_mcux_flexcomm.c.
The way Zephyr's spi.h and spi_mcux_flexcomm.c files connect to one another in terms of function calls is not clear, but there is a connection.
Using `ctags` to follow symbol and entity definitions gives a seemingly cyclic definition of the `spi_transceive_dt()` API call in a Zephyr-based app. The API spi_transceive_dt() maps to spi_transceive() per ctags results run in the Zephyr source tree, but the function definition ultimately points to a reference to api->transceive. The nature of the structures and pointers used to define a generalized interface, which maps to a particular target MCU family, apparently makes it impossible for ctags to traverse the application-side SPI transceive call to its ultimate definition.
The seeming circular definition point appears here in https://github.com/zephyrproject-rtos/zephyr/blob/main/include/zephyr/drivers/spi.h#L761.
Zephyr's mandatory SPI driver API is called out in https://github.com/zephyrproject-rtos/zephyr/blob/main/include/zephyr/drivers/spi.h#L647.
NXP / Freescale driver contribution spi_mcux_flexcomm.c provides this API here:
This excerpt is a snapshot of Zephyr 3.4.0, commit 356c8cbe63ae. While it will change over time it is worth looking at here:
static int transceive(const struct device *dev,
		      const struct spi_config *spi_cfg,
		      const struct spi_buf_set *tx_bufs,
		      const struct spi_buf_set *rx_bufs,
		      bool asynchronous,
		      spi_callback_t cb,
		      void *userdata)
{
	struct spi_mcux_data *data = dev->data;
	int ret;

	spi_context_lock(&data->ctx, asynchronous, cb, userdata, spi_cfg);

	ret = spi_mcux_configure(dev, spi_cfg);
	if (ret) {
		goto out;
	}

	spi_context_buffers_setup(&data->ctx, tx_bufs, rx_bufs, 1);

	spi_context_cs_control(&data->ctx, true);

	spi_mcux_transfer_next_packet(dev);

	ret = spi_context_wait_for_completion(&data->ctx);
out:
	spi_context_release(&data->ctx, ret);

	return ret;
}

static int spi_mcux_transceive(const struct device *dev,
			       const struct spi_config *spi_cfg,
			       const struct spi_buf_set *tx_bufs,
			       const struct spi_buf_set *rx_bufs)
{
#ifdef CONFIG_SPI_MCUX_FLEXCOMM_DMA
	return transceive_dma(dev, spi_cfg, tx_bufs, rx_bufs, false, NULL, NULL);
#endif
	return transceive(dev, spi_cfg, tx_bufs, rx_bufs, false, NULL, NULL);
}
Also worth noting:
./mcux/mcux-sdk/drivers/spi/fsl_spi.c:1052:status_t SPI_MasterTransferNonBlocking(SPI_Type *base, spi_master_handle_t *handle, spi_transfer_t *xfer
A subtle and interesting facet of the Zephyr in-tree SPI driver is its code to deal with TX and RX buffers whose lengths differ. See code at and around:
https://github.com/zephyrproject-rtos/zephyr/blob/main/drivers/spi/spi_mcux_flexcomm.c#L106
- Recent code contributions to Zephyr SPI drivers -
A recent contributor issue to Zephyr in-tree driver code:
^ SPI NOR FLASH
Zephyr provides a SPI NOR FLASH driver API which, depending upon a developer's target MCU, points to one or another third-party SDK or driver code. When working with Zephyr 3.4.0 and a Macronix FLASH memory device, `ctags` isn't sufficient to locate the flash erase function definition. A different search is needed.
Search steps:
Step (1) Begin with pattern match on project build artifact `build/cpu0/zephyr.lst`:
Excerpt:
10012f36 <flash_erase>:
	rc = api->erase(dev, offset, size);
10012f36:	6883	ldr	r3, [r0, #8]
10012f38:	689b	ldr	r3, [r3, #8]
10012f3a:	4718	bx	r3
There are other instances of routine name `flash_erase` but the clue in this excerpt is the data structure `api`. A search in . . .
Step (2) Use ctags in app source file on API call `flash_erase`. This goes to file `"$workspace/zephyr/include/zephyr/drivers/espi.h"`. In this file find:
__subsystem struct espi_driver_api {
	espi_api_config config;
	espi_api_get_channel_status get_channel_status;
	espi_api_read_request read_request;
	espi_api_write_request write_request;
	espi_api_lpc_read_request read_lpc_request;
	espi_api_lpc_write_request write_lpc_request;
	espi_api_send_vwire send_vwire;
	espi_api_receive_vwire receive_vwire;
	espi_api_send_oob send_oob;
	espi_api_receive_oob receive_oob;
	espi_api_flash_read flash_read;
	espi_api_flash_write flash_write;
	espi_api_flash_erase flash_erase;
	espi_api_manage_callback manage_callback;
};
Step (3) In zephyr.lst file use `ctags` to navigate to file `$workspace/zephyr/drivers/flash/soc_flash_nrf.c`. Here find:
482 static int erase(uint32_t addr, uint32_t size)
483 {
484 	struct flash_context context = {
485 		.flash_addr = addr,
486 		.len = size,
487 #ifndef CONFIG_SOC_FLASH_NRF_RADIO_SYNC_NONE
488 		.enable_time_limit = 0, /* disable time limit */
489 #endif /* !CONFIG_SOC_FLASH_NRF_RADIO_SYNC_NONE */
490 #if defined(CONFIG_SOC_FLASH_NRF_PARTIAL_ERASE)
491 		.flash_addr_next = addr
492 #endif
493 	};
494
495 	return erase_op(&context);
496 }
Given this, it looks like step (2) did not directly help us reach the definition of the `flash_erase()` API. The routine referenced on line 495 above has, in the same file, the definition:
static int erase_op(void *context)
{
	uint32_t pg_size = nrfx_nvmc_flash_page_size_get();
	struct flash_context *e_ctx = context;

#ifndef CONFIG_SOC_FLASH_NRF_RADIO_SYNC_NONE
	uint32_t i = 0U;

	if (e_ctx->enable_time_limit) {
		nrf_flash_sync_get_timestamp_begin();
	}
#endif /* !CONFIG_SOC_FLASH_NRF_RADIO_SYNC_NONE */

#ifdef CONFIG_SOC_FLASH_NRF_UICR
	if (e_ctx->flash_addr == (off_t)NRF_UICR) {
		if (SUSPEND_POFWARN()) {
			return -ECANCELED;
		}

		(void)nrfx_nvmc_uicr_erase();
		RESUME_POFWARN();
		return FLASH_OP_DONE;
	}
#endif

	do {
		if (SUSPEND_POFWARN()) {
			return -ECANCELED;
		}

#if defined(CONFIG_SOC_FLASH_NRF_PARTIAL_ERASE)
		if (e_ctx->flash_addr == e_ctx->flash_addr_next) {
			nrfx_nvmc_page_partial_erase_init(e_ctx->flash_addr,
				CONFIG_SOC_FLASH_NRF_PARTIAL_ERASE_MS);
			e_ctx->flash_addr_next += pg_size;
		}

		if (nrfx_nvmc_page_partial_erase_continue()) {
			e_ctx->len -= pg_size;
			e_ctx->flash_addr += pg_size;
		}
#else
		(void)nrfx_nvmc_page_erase(e_ctx->flash_addr);
		e_ctx->len -= pg_size;
		e_ctx->flash_addr += pg_size;
#endif /* CONFIG_SOC_FLASH_NRF_PARTIAL_ERASE */

		RESUME_POFWARN();

#ifndef CONFIG_SOC_FLASH_NRF_RADIO_SYNC_NONE
		i++;

		if (e_ctx->enable_time_limit) {
			if (nrf_flash_sync_check_time_limit(i)) {
				break;
			}
		}
#endif /* !CONFIG_SOC_FLASH_NRF_RADIO_SYNC_NONE */

	} while (e_ctx->len > 0);

	return (e_ctx->len > 0) ? FLASH_OP_ONGOING : FLASH_OP_DONE;
}
^ Driver Code Factoring
In-tree Zephyr drivers exist as part of Zephyr's source code tree, in other words the code repo that comprises Zephyr RTOS itself. Out-of-tree drivers written to cooperate with Zephyr exist outside of Zephyr's source code tree and are often their own independent code projects. See however this local page's In-tree drivers section for greater detail on driver code factoring in Zephyr RTOS.
Example Zephyr app and supporting code base which includes an out-of-tree driver factored alongside the app code:
^ Out-of-tree drivers
Stub section for Zephyr out-of-tree drivers.
^ Zephyr Work Queues
Zephyr RTOS provides an RTOS feature called a work queue, which developers may define and use. Zephyr also provides a system work queue. The following Golioth blog post talks a bit about these:
- https://blog.golioth.io/zephyr-threads-work-queues-message-queues-and-how-we-use-them/ Zephyr threads versus work queues
^ Zephyr RTOS Logger and Backend Features
Zephyr has five log levels ranging from LOG_LEVEL_NONE to LOG_LEVEL_DBG. These are defined in log_core.h.
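A minimal usage sketch of the logging API within a Zephyr application; the module name and messages are invented for illustration:

```c
#include <zephyr/logging/log.h>

/* Register this source file as a log module at INFO level */
LOG_MODULE_REGISTER(my_module, LOG_LEVEL_INF);

void report_temperature(int temperature)
{
	LOG_INF("temperature: %d C", temperature);

	if (temperature > 85) {
		LOG_WRN("temperature high");
	}

	/* Below the module's compiled-in level, so this is compiled out */
	LOG_DBG("raw sensor details omitted");
}
```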
Printing of log messages and other strings can be deferred, and there is a mechanism in place in Zephyr RTOS project called `cbprintf packages` to help in this:
It is also possible to reduce code size in a Zephyr app by avoiding libc library functions in the s*printf() family:
Following links need updating to Zephyr 3.4.0:
- https://docs.zephyrproject.org/2.6.0/reference/logging/index.html#logger-backend-interface
- https://docs.zephyrproject.org/2.6.0/reference/logging/index.html#default-frontend
Zephyr's thread_analyzer, when not configured to dump reports via printk(), instead sends thread data to LOG_INF(). Excerpt from Zephyr file `./zephyr/subsys/debug/thread_analyzer.c`:
#include <kernel.h>
#include <debug/thread_analyzer.h>
#include <debug/stack.h>
#include <kernel.h>
#include <logging/log.h>
#include <stdio.h>

LOG_MODULE_REGISTER(thread_analyzer, CONFIG_THREAD_ANALYZER_LOG_LEVEL);

#if IS_ENABLED(CONFIG_THREAD_ANALYZER_USE_PRINTK)
#define THREAD_ANALYZER_PRINT(...) printk(__VA_ARGS__)
#define THREAD_ANALYZER_FMT(str)   str "\n"
#define THREAD_ANALYZER_VSTR(str)  (str)
#else
#define THREAD_ANALYZER_PRINT(...) LOG_INF(__VA_ARGS__)
#define THREAD_ANALYZER_FMT(str)   str
#define THREAD_ANALYZER_VSTR(str)  log_strdup(str)
#endif
Zephyr macro `LOG_INF` is in turn defined:
ted@localhost:~/projects/zephyr-based/z4-sandbox-kionix-work/zephyr$ grep -nr LOG_INF ./* | grep define
./include/logging/log.h:61:#define LOG_INF(...) Z_LOG(LOG_LEVEL_INF, __VA_ARGS__)
Definition of Z_LOG:
ted@localhost:~/projects/zephyr-based/z4-sandbox-kionix-work/zephyr$ grep -nr 'Z_LOG[^_]' ./*
./include/logging/log_core.h:295:#define Z_LOG2(_level, _source, _dsource, ...) do { \
./include/logging/log_core.h:329:#define Z_LOG(_level, ...) \

$ vi ./include/logging/log_core.h
329 #define Z_LOG(_level, ...) \
330 	Z_LOG2(_level, __log_current_const_data, __log_current_dynamic_data, __VA_ARGS__)
A Zephyr RTOS sample to check out:
`ted@localhost:~/projects/zephyr-based/z4-sandbox-kionix-work/zephyr/samples/subsys/logging/logger$`
- 2022-10-05 -
Zephyr v2.6.0 source file `log_core.h` may also define `log_strdup()`:
./subsys/logging/log_core.c:1017:char *z_log_strdup(const char *str)
But it looks like the routine is statically in-lined in Zephyr header file `log.h`:
./include/logging/log.h:290:static inline char *log_strdup(const char *str)
^ Zephyr thread_analyzer_cb And Related
Zephyr 2.6.0's thread_analyze_cb() routine gathers and populates a structure with all pertinent, reported run-time thread statistics. This is however a static routine, so we cannot call it directly; nor does it return the summary thread resource use data to its direct calling routine. There is yet hope; to understand a path forward, excerpted here is the definition of this important thread reporting routine from file `./zephyr/subsys/debug/thread_analyzer.c`:
static void thread_analyze_cb(const struct k_thread *cthread, void *user_data)
{
	struct k_thread *thread = (struct k_thread *)cthread;
#ifdef CONFIG_THREAD_RUNTIME_STATS
	k_thread_runtime_stats_t rt_stats_all;
	k_thread_runtime_stats_t rt_stats_thread;
	int ret;
#endif
	size_t size = thread->stack_info.size;
	thread_analyzer_cb cb = user_data;
	struct thread_analyzer_info info;
	char hexname[PTR_STR_MAXLEN + 1];
	const char *name;
	size_t unused;
	int err;

	name = k_thread_name_get((k_tid_t)thread);
	if (!name || name[0] == '\0') {
		name = hexname;
		snprintk(hexname, sizeof(hexname), "%p", (void *)thread);
	}

	err = k_thread_stack_space_get(thread, &unused);
	if (err) {
		THREAD_ANALYZER_PRINT(
			THREAD_ANALYZER_FMT(
				" %-20s: unable to get stack space (%d)"),
			name, err);

		unused = 0;
	}

	info.name = name;
	info.stack_size = size;
	info.stack_used = size - unused;

#ifdef CONFIG_THREAD_RUNTIME_STATS
	ret = 0;

	if (k_thread_runtime_stats_get(thread, &rt_stats_thread) != 0) {
		ret++;
	}

	if (k_thread_runtime_stats_all_get(&rt_stats_all) != 0) {
		ret++;
	}
	if (ret == 0) {
		info.utilization = (rt_stats_thread.execution_cycles * 100U) /
			rt_stats_all.execution_cycles;
	}
#endif
	cb(&info);
}
The routine which calls this is:
void thread_analyzer_run(thread_analyzer_cb cb)
{
	if (IS_ENABLED(CONFIG_THREAD_ANALYZER_RUN_UNLOCKED)) {
		k_thread_foreach_unlocked(thread_analyze_cb, cb);
	} else {
		k_thread_foreach(thread_analyze_cb, cb);
	}
}
We should be able to call thread_analyzer_run(thread_analyzer_cb cb) directly. If yes, and if we can learn how &info is defined, we should be able to redirect thread_analyzer reports to an arbitrary UART . . .
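Building on the above, a hedged sketch of supplying a custom callback to `thread_analyzer_run()`; the callback name and output formatting are assumptions, and the include paths follow Zephyr 3.x layout (2.6.0 uses `<debug/thread_analyzer.h>`):

```c
#include <zephyr/kernel.h>
#include <zephyr/debug/thread_analyzer.h>

/* Hypothetical custom callback: receives one thread_analyzer_info per
 * thread, so reports can be redirected anywhere, e.g. a chosen UART. */
static void my_thread_report_cb(struct thread_analyzer_info *info)
{
	/* info->name, info->stack_size, info->stack_used are filled in
	 * by thread_analyze_cb() before this callback is invoked. */
	printk("%-20s stack %zu/%zu\n",
	       info->name, info->stack_used, info->stack_size);
}

void dump_thread_stats(void)
{
	thread_analyzer_run(my_thread_report_cb);
}
```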
^ struct thread_analyzer_info
ted@localhost:~/projects/zephyr-based/z4-sandbox-kionix-work/zephyr$ grep -nr 'struct thread_analyzer_info' ./*
./include/debug/thread_analyzer.h:23:struct thread_analyzer_info {
./include/debug/thread_analyzer.h:44:typedef void (*thread_analyzer_cb)(struct thread_analyzer_info *info);
Structure definition for &info passed to thread_analyzer_cb() function above:
/** @defgroup thread_analyzer Thread analyzer
 * @brief Module for analyzing threads
 *
 * This module implements functions and the configuration that simplifies
 * thread analysis.
 * @{
 */

struct thread_analyzer_info {
	/** The name of the thread or stringified address of the thread handle
	 * if name is not set.
	 */
	const char *name;

	/** The total size of the stack*/
	size_t stack_size;

	/** Stack size in used */
	size_t stack_used;

#ifdef CONFIG_THREAD_RUNTIME_STATS
	unsigned int utilization;
#endif
};
^ Zephyr Shell
This section started 2023-06-12. Growth goal at this time is to learn how Zephyr shell supports multiple shell sessions, across one or many interfaces.
First reference noted here relates to a developer needing to separate shell communications from Zephyr logging. Not quite multi-session / multi-context shell support, but somewhere in the ballpark:
These links ask salient questions on the topic of multiple shell session support in Zephyr's shell facility:
- https://devzone.nordicsemi.com/f/nordic-q-a/92193/multiple-zephyr-shell-instances-over-uart
- https://devzone.nordicsemi.com/f/nordic-q-a/92325/use-shell-over-uart-and-uart-async-api-on-nrf52840
----- ----- ----- ----- ----- ----- ----- ----- -----
^ Kernel Services
Zephyr stack sentinel
Zephyr Memory Management
^ Zephyr Kernel Timing
System timing, the timing of hardware and firmware running together, is an important, non-trivial feature of nearly all embedded systems. In RTOS situations the operating system must expend some resources to keep track of a system timing mechanism with a time granularity of n microseconds; a given system may use another unit of time, but the concept remains the same. Local article contributor Ted believes this applies to operating systems (and bare-metal projects employing a timing mechanism) where a "system tick" or "systick" is implemented.
Zephyr provides an API to permit firmware at runtime to obtain system timing parameters, such as the hardware clock rate:
// A Zephyr API to obtain MCU clock cycles per second:
// sys_clock_hw_cycles_per_sec()
Program uptime or "running hours" may also be of import to a given project. Following link at Zephyr Project documentation pages has valuable info on various Zephyr timing and work queueing facilities:
-
Similar documentation but describes sources of timing inaccuracies in running Zephyr RTOS relative to civil, real world time:
Must #include <kernel.h> . . .
Related:
Important to follow up on this bug report relating to above mentioned Zephyr timing features:
- 2023-12-06 -
In Zephyr, a k_timeout_t is a structure, not a plain integer.
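Because k_timeout_t is a structure, timeouts are constructed with Zephyr's helper macros rather than raw numbers; a brief sketch (the semaphore is illustrative):

```c
#include <zephyr/kernel.h>

void timeout_examples(struct k_sem *sem)
{
	k_sleep(K_MSEC(100));             /* relative timeout of 100 ms */
	(void)k_sem_take(sem, K_NO_WAIT); /* do not block at all        */
	(void)k_sem_take(sem, K_FOREVER); /* block indefinitely         */
}
```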
^ Zephyr Application Code Relocation