pub struct AddressSpace { /* private fields */ }
A per-process address space backed by a PML4 page table.
Kernel tasks share a single AddressSpace (the kernel AS).
User tasks each get their own, with kernel entries (PML4[256..512]) cloned
so that the kernel is always mapped regardless of which AS is active.
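On x86-64 with 4-level paging, the PML4 index of a virtual address is bits 39..=47, so every canonical higher-half address (0xffff_8000_0000_0000 and up) selects an entry in 256..512. A minimal sketch of that index calculation (the helper name is illustrative, not part of this API):

```rust
/// Extract the PML4 index (bits 39..=47) of a 4-level x86-64 virtual address.
fn pml4_index(vaddr: u64) -> usize {
    ((vaddr >> 39) & 0x1ff) as usize
}

fn main() {
    // The first higher-half address selects the first kernel PML4 entry.
    assert_eq!(pml4_index(0xffff_8000_0000_0000), 256);
    // The highest user address stays in the lower half of the table.
    assert_eq!(pml4_index(0x0000_7fff_ffff_f000), 255);
    assert_eq!(pml4_index(0), 0);
}
```

This is why cloning entries 256..512 into a user PML4 is enough to keep the kernel mapped in every address space.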
Implementations
impl AddressSpace
pub unsafe fn new_kernel() -> Self
Create the kernel address space by wrapping the current (boot) CR3.
§Safety
Must be called exactly once, during single-threaded init, after paging is initialized.
pub fn new_user() -> Result<Self, &'static str>
Create a new user address space with the kernel half cloned.
Allocates a fresh PML4 frame, zeroes it, then copies entries 256..512 from the kernel PML4. This shares the kernel’s L3/L2/L1 subtrees so kernel mapping changes propagate automatically.
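The kernel-half clone amounts to copying the upper 256 of the 512 PML4 entries into the freshly zeroed table. A sketch using plain arrays in place of real page-table frames (names and types are illustrative):

```rust
const ENTRIES: usize = 512;
const KERNEL_START: usize = 256; // first higher-half PML4 slot

/// Copy the kernel's higher-half entries into a freshly zeroed table.
/// Only the entries are copied, not the subtrees they point to, which
/// is what makes later kernel mapping changes propagate automatically.
fn clone_kernel_half(kernel_pml4: &[u64; ENTRIES]) -> [u64; ENTRIES] {
    let mut pml4 = [0u64; ENTRIES]; // fresh, zeroed user table
    pml4[KERNEL_START..].copy_from_slice(&kernel_pml4[KERNEL_START..]);
    pml4
}

fn main() {
    let mut kernel = [0u64; ENTRIES];
    kernel[0] = 0xdead; // user-half entry: must NOT be copied
    kernel[300] = 0xbeef; // kernel-half entry: must be shared
    let user = clone_kernel_half(&kernel);
    assert_eq!(user[0], 0);
    assert_eq!(user[300], 0xbeef);
}
```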
pub fn reserve_region(
    &self,
    start: u64,
    page_count: usize,
    flags: VmaFlags,
    vma_type: VmaType,
    page_size: VmaPageSize,
) -> Result<(), &'static str>
Reserve a contiguous region of virtual pages without allocating physical frames.
The pages will be mapped lazily during page faults (Demand Paging).
pub fn handle_fault(&self, fault_addr: u64) -> Result<(), &'static str>
Handle a page fault by checking if the address falls within a reserved VMA.
If it does, allocates a physical frame and maps it.
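The decision step of that fault path is a containment check against the reserved regions. A simplified sketch with hypothetical `Vma` fields (the real VMA type is richer than this):

```rust
const PAGE_SIZE: u64 = 4096;

#[derive(Clone, Copy)]
struct Vma {
    start: u64,
    pages: u64,
}

/// Demand-paging decision: a fault is serviceable only if the faulting
/// address lies inside a reserved region; otherwise it is a genuine fault.
fn vma_containing(vmas: &[Vma], fault_addr: u64) -> Option<Vma> {
    vmas.iter()
        .copied()
        .find(|v| fault_addr >= v.start && fault_addr < v.start + v.pages * PAGE_SIZE)
}

fn main() {
    let vmas = [Vma { start: 0x4000_0000, pages: 4 }];
    // Fault inside the reserved region: allocate and map a frame here.
    assert!(vma_containing(&vmas, 0x4000_2abc).is_some());
    // Fault outside every VMA: return an error instead.
    assert!(vma_containing(&vmas, 0x5000_0000).is_none());
}
```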
pub fn map_region(
    &self,
    start: u64,
    page_count: usize,
    flags: VmaFlags,
    vma_type: VmaType,
    page_size: VmaPageSize,
) -> Result<(), &'static str>
Map a contiguous region of pages backed by newly allocated physical frames.
Frames are allocated from the buddy allocator and zero-filled. The region is tracked in the VMA list.
pub fn unmap_region(
    &self,
    start: u64,
    page_count: usize,
    page_size: VmaPageSize,
) -> Result<(), &'static str>
Unmap a previously mapped region and free the backing frames.
pub fn find_free_vma_range(
    &self,
    hint: u64,
    n_pages: usize,
    page_size: VmaPageSize,
) -> Option<u64>
Find a free virtual address range of n_pages pages of page_size starting at or after hint.
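A common way to implement such a search is a first-fit scan over the VMA list kept sorted by start address. A simplified sketch that ignores large-page alignment (the actual implementation may differ):

```rust
const PAGE_SIZE: u64 = 4096;

/// First-fit search over a sorted list of (start, end) mappings.
/// Returns the first gap at or after `hint` large enough for `n_pages`
/// 4 KiB pages, or None on address overflow.
fn find_free_range(sorted: &[(u64, u64)], hint: u64, n_pages: u64) -> Option<u64> {
    let need = n_pages * PAGE_SIZE;
    let mut candidate = hint;
    for &(start, end) in sorted {
        if candidate + need <= start {
            return Some(candidate); // the gap before this VMA fits
        }
        candidate = candidate.max(end); // skip past this mapping
    }
    candidate.checked_add(need).map(|_| candidate)
}

fn main() {
    let vmas = [(0x1000, 0x3000), (0x5000, 0x6000)];
    // The hole at 0x3000..0x5000 fits a 2-page request.
    assert_eq!(find_free_range(&vmas, 0x1000, 2), Some(0x3000));
    // A 4-page request only fits after the last mapping.
    assert_eq!(find_free_range(&vmas, 0x1000, 4), Some(0x6000));
}
```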
pub fn has_mapping_in_range(&self, addr: u64, len: u64) -> bool
Return true if any tracked VMA overlaps [addr, addr + len).
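Overlap of two half-open ranges reduces to the standard two-comparison check, sketched here:

```rust
/// Two half-open ranges [a_start, a_end) and [b_start, b_end) overlap
/// iff each one starts before the other ends.
fn overlaps(a_start: u64, a_end: u64, b_start: u64, b_end: u64) -> bool {
    a_start < b_end && b_start < a_end
}

fn main() {
    assert!(overlaps(0x1000, 0x3000, 0x2000, 0x4000)); // partial overlap
    assert!(!overlaps(0x1000, 0x2000, 0x2000, 0x3000)); // adjacent, no overlap
}
```

The half-open convention makes adjacent regions count as non-overlapping, which is what allows back-to-back VMAs.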
pub fn region_by_start(&self, start: u64) -> Option<VirtualMemoryRegion>
Return the tracked VMA that starts exactly at start.
pub fn any_mapped_in_range(
    &self,
    addr: u64,
    len: u64,
    page_size: VmaPageSize,
) -> Result<bool, &'static str>
Returns true if any page in [addr, addr + len) is currently mapped.
pub fn protect_range(
    &self,
    addr: u64,
    len: u64,
    flags: VmaFlags,
) -> Result<(), &'static str>
Change the access flags of every page in [addr, addr + len) to flags.
pub fn translate(&self, vaddr: VirtAddr) -> Option<PhysAddr>
Translate a virtual address to its mapped physical address.
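A 4-level translation walks PML4 → PDPT → PD → PT using four 9-bit index fields plus a 12-bit page offset. The index extraction can be sketched as follows (assuming 4 KiB pages; the helper name is illustrative):

```rust
/// Split a 4-level x86-64 virtual address into its per-level table
/// indices (PML4, PDPT, PD, PT) plus the 12-bit page offset.
fn walk_indices(vaddr: u64) -> ([usize; 4], u64) {
    let idx = |shift: u32| ((vaddr >> shift) & 0x1ff) as usize;
    ([idx(39), idx(30), idx(21), idx(12)], vaddr & 0xfff)
}

fn main() {
    let (levels, offset) = walk_indices(0xffff_8000_0020_1abc);
    // Higher-half address: PML4 slot 256, then PDPT 0, PD 1, PT 1.
    assert_eq!(levels, [256, 0, 1, 1]);
    assert_eq!(offset, 0xabc);
}
```

Translation fails (returns None here, conceptually) as soon as any level's entry is not present; a 2 MiB mapping would stop the walk at the PD level and use a 21-bit offset instead.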
pub unsafe fn switch_to(&self)
Switch the CPU to this address space by writing CR3.
Skips the write if CR3 already points to this address space (avoids unnecessary TLB flush).
§Safety
The caller must ensure this address space’s page tables are valid and that the kernel half is correctly mapped.
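The redundant-write check matters because writing CR3 flushes the TLB even when the value is unchanged. A real kernel uses a privileged mov to cr3; this sketch mocks the register so the skip behavior can be observed (all names here are illustrative):

```rust
/// Mock CR3: holds the current root-table physical address and counts
/// writes, standing in for the real register (each real write flushes
/// non-global TLB entries).
struct MockCr3 {
    value: u64,
    writes: usize,
}

impl MockCr3 {
    /// Write only if a different PML4 is requested, mirroring the
    /// redundant-write avoidance described above.
    fn switch_to(&mut self, pml4_phys: u64) {
        if self.value != pml4_phys {
            self.value = pml4_phys;
            self.writes += 1;
        }
    }
}

fn main() {
    let mut cr3 = MockCr3 { value: 0x1000, writes: 0 };
    cr3.switch_to(0x1000); // already active: no write, no TLB flush
    cr3.switch_to(0x2000); // different address space: one write
    assert_eq!(cr3.writes, 1);
    assert_eq!(cr3.value, 0x2000);
}
```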
pub fn has_user_mappings(&self) -> bool
Check if this address space has any user-space memory mappings.
pub fn unmap_all_user_regions(&self)
Unmap all tracked user regions (best-effort).
This frees user frames and clears the VMA list. Kernel mappings are untouched. Does not allocate memory.
Trait Implementations
impl Send for AddressSpace
impl Sync for AddressSpace
Auto Trait Implementations
impl !Freeze for AddressSpace
impl !RefUnwindSafe for AddressSpace
impl Unpin for AddressSpace
impl UnsafeUnpin for AddressSpace
impl UnwindSafe for AddressSpace
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true, or into a Right variant otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, or into a Right variant otherwise.