#[repr(C)]
pub struct Task {
pub magic: u64,
pub pid: u32,
pub kernel_stack: u64,
pub mm: VMMan,
pub state: TaskState,
pub context: Context64,
}
Currently only `kernel_stack` and `context` are important. The task struct is placed at the starting address (low address) of the kernel stack, so we can retrieve the task struct at any time by masking the kernel stack pointer.

NOTE: we assume all fields in `Task` are only modified by the task itself, i.e. no task should modify another task's state. (This may change, in which case we will need some atomics.)

TODO: the `mm` is a heap-allocated object (a vec of VMAs), but the task struct doesn't have a lifetime. The memory used by the `mm` must be cleaned up manually when a task exits.
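The stack-masking trick described above can be sketched as follows. This is a minimal userspace illustration, assuming (hypothetically) that kernel stacks are a power-of-two size and aligned to that size; `KSTACK_SIZE` and the addresses are made up for the example.

```rust
// Sketch: recovering the Task's address by masking the stack pointer.
// Assumes the kernel stack is KSTACK_SIZE-aligned and KSTACK_SIZE is a
// power of two (hypothetical constant, not from the real kernel).
const KSTACK_SIZE: u64 = 0x4000; // assumed 16 KiB

// The Task sits at the stack bottom (low address), so masking off the
// low bits of any SP inside the stack yields the Task's address.
fn task_base_from_sp(sp: u64) -> u64 {
    sp & !(KSTACK_SIZE - 1)
}

fn main() {
    let base = 0xffff_8000_0004_0000u64; // hypothetical aligned stack bottom
    let sp = base + 0x3a8;               // some SP inside that stack
    assert_eq!(task_base_from_sp(sp), base);
}
```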
Fields§
§magic: u64
§pid: u32
§kernel_stack: u64
note that this points to the stack bottom (low addr)
§mm: VMMan
§state: TaskState
§context: Context64
Implementations§
impl Task
unsafe fn settle_on_stack<'a>(stack_addr: u64, t: Task) -> &'a mut Task
Unsafe because the caller must ensure the stack pointer is valid, i.e. allocated through `KStackAllocator`.
fn prepare_context(&mut self, entry: u64)
`settle_on_stack` and `prepare_context` must both be called before switching to the task. TODO: combine them into one single API.
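The settle-then-prepare flow can be sketched in userspace, using a heap buffer as a stand-in for a kernel stack. The struct layout and `TASK_MAGIC` value here are hypothetical simplifications, and the `entry` field stands in for the real `Context64`.

```rust
// Userspace sketch of placing a Task at the low address of its stack.
#[repr(C)]
struct Task {
    magic: u64,
    pid: u32,
    entry: u64, // stands in for Context64 in this sketch
}

const TASK_MAGIC: u64 = 0x1234_5678; // hypothetical value

// Write the Task to the stack bottom and return a reference into the stack.
// Unsafe: stack_addr must point to valid, suitably aligned, owned memory.
unsafe fn settle_on_stack<'a>(stack_addr: u64, t: Task) -> &'a mut Task {
    let p = stack_addr as *mut Task;
    p.write(t);
    &mut *p
}

fn main() {
    // vec of u64 gives us 8-byte alignment, standing in for KStackAllocator
    let mut stack = vec![0u64; 512];
    let base = stack.as_mut_ptr() as u64;
    let task = unsafe { settle_on_stack(base, Task { magic: TASK_MAGIC, pid: 1, entry: 0 }) };
    task.entry = 0xdead; // prepare_context would fill the real Context64
    assert_eq!(task.pid, 1);
    assert_eq!(task.magic, TASK_MAGIC);
}
```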
fn get_init_kernel_sp(&self) -> u64
Get the kernel stack top (high address) to initialize the new task. Note that stack pointers often have alignment requirements; we use 8-byte alignment here.
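A minimal sketch of this computation, assuming a hypothetical fixed stack size and taking the stack bottom as a plain argument rather than reading it from `self`:

```rust
// Sketch: the initial SP is the high end of the stack, aligned down to 8 bytes.
const KSTACK_SIZE: u64 = 0x4000; // assumed stack size, not from the real kernel

fn get_init_kernel_sp(stack_bottom: u64) -> u64 {
    let top = stack_bottom + KSTACK_SIZE;
    top & !0b111 // enforce 8-byte stack-pointer alignment
}

fn main() {
    assert_eq!(get_init_kernel_sp(0x4000), 0x8000);
}
```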
pub fn current<'a>() -> Option<&'a mut Task>
Returns a reference to the currently running task struct. Returns `None` if the magic number on the kernel stack is corrupted; this happens when:

- the task struct is not correctly placed on the stack
- `current` is called for the initial task, which has no task struct on its stack
- the stack is corrupted (e.g. by a stack overflow)
TODO: also add a canary at the end of the task struct and check it.
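The TODO above could look roughly like this: validate a magic at the front of the struct and a canary at its end. Both constants and the trailing `canary` field are hypothetical, not part of the current `Task` layout.

```rust
// Sketch of magic + trailing-canary validation (hypothetical constants/field).
const TASK_MAGIC: u64 = 0xdead_beef_cafe_babe; // assumed value
const TASK_CANARY: u64 = 0x5eed_5eed_5eed_5eed; // assumed value

#[repr(C)]
struct Task {
    magic: u64,  // checked first: detects a missing/misplaced task struct
    pid: u32,
    canary: u64, // hypothetical trailing canary: detects partial overwrites
}

fn validate(t: &Task) -> bool {
    t.magic == TASK_MAGIC && t.canary == TASK_CANARY
}

fn main() {
    let t = Task { magic: TASK_MAGIC, pid: 7, canary: TASK_CANARY };
    assert!(validate(&t));
    let bad = Task { magic: 0, pid: 7, canary: TASK_CANARY };
    assert!(!validate(&bad));
}
```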
pub fn taskid(&self) -> TaskId
pub unsafe fn curr_wait_in(wait_room: &mut VecDeque<TaskId>)
A task may be present in multiple wait rooms; this is not logically possible at the moment, but would be necessary for things like epoll. Requires manual attention for synchronization.
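The wait-room idea can be sketched as a plain FIFO of task IDs. `TaskId` here is a hypothetical stand-in newtype, and the current task is passed explicitly since there is no real scheduler in this sketch.

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for the kernel's TaskId type.
#[derive(Clone, Copy, PartialEq, Debug)]
struct TaskId(u64);

// The current task enqueues itself; kernel code would then mark it blocked
// and switch to another task.
fn curr_wait_in(wait_room: &mut VecDeque<TaskId>, current: TaskId) {
    wait_room.push_back(current);
}

fn main() {
    let mut room = VecDeque::new();
    curr_wait_in(&mut room, TaskId(1));
    curr_wait_in(&mut room, TaskId(2));
    // a wakeup pops waiters in FIFO order
    assert_eq!(room.pop_front(), Some(TaskId(1)));
    assert_eq!(room.pop_front(), Some(TaskId(2)));
}
```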
pub unsafe fn wakeup(&mut self)
Does not lock the `GLOBAL_SCHEDULER`; the caller is responsible for doing that, e.g. call `task.wakeup()` from the epilogue.
pub fn nanosleep(&mut self, ns: u64)
pub fn create_task(pid: u32, entry: u64) -> TaskId
Create a kernel thread. You need to add it to the scheduler run queue manually.
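The manual enqueue this requires can be sketched as below. `TaskId`, the trivialized `create_task` body, and the run queue are all hypothetical stand-ins; the real function settles a `Task` on a fresh kernel stack and prepares its context.

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for the kernel's TaskId type.
#[derive(Clone, Copy, PartialEq, Debug)]
struct TaskId(u64);

// Stand-in for create_task: the real one allocates a kernel stack,
// settles the Task on it, and prepares its context.
fn create_task(pid: u32, _entry: u64) -> TaskId {
    TaskId(pid as u64)
}

fn main() {
    let mut run_queue: VecDeque<TaskId> = VecDeque::new();
    let tid = create_task(42, 0xffff_8000_0010_0000);
    // create_task does NOT enqueue the task; the caller must do it.
    run_queue.push_back(tid);
    assert_eq!(run_queue.pop_front(), Some(TaskId(42)));
}
```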