Some objects declare their handle type as const, while others declare
it as constexpr. This makes the const ones constexpr for consistency,
and prevents unexpected compilation errors if these happen to be used
within a constexpr context.
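As a minimal before/after sketch (the object and enumerator names here
are illustrative, not tied to any particular class):
    // Before (declared as const):
    // static const HandleType HANDLE_TYPE = HandleType::ReadableEvent;
    // After (constexpr, for consistency with the other kernel objects):
    static constexpr HandleType HANDLE_TYPE = HandleType::ReadableEvent;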
We need to ensure dynarmic gets a valid pointer if the page table is
resized, since the relevant pointers would be invalidated in that
scenario. The page table can be resized depending on what kind of
address space is specified within the NPDM metadata (if it's present).
Adjusts the interface of the wrappers to take a system reference, which
allows accessing a system instance without using the global accessors.
This also allows getting rid of all global accessors within the
supervisor call handling code while preserving the existing interface.
While this does make the wrappers themselves slightly more noisy, it
will be further cleaned up in a follow-up.
Keeps the return type consistent with the function name. While we're at
it, we can also reduce the amount of boilerplate involved with handling
these by using structured bindings.
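As a generic, hedged illustration of the structured-binding cleanup
(the helper function here is hypothetical, not part of the codebase):
    #include <cstdint>
    #include <utility>

    // Hypothetical helper used purely for illustration.
    std::pair<std::uint32_t, std::uint64_t> MakeIdAndValue() {
        return {1, 42};
    }

    void Example() {
        // Before: manual .first/.second access.
        //   const auto pair = MakeIdAndValue();
        //   const auto id = pair.first;
        //   const auto value = pair.second;
        // After: structured bindings name both elements directly.
        const auto [id, value] = MakeIdAndValue();
        static_cast<void>(id);
        static_cast<void>(value);
    }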
We need to check whether the given address is within the kernel
address space or whether it isn't word-aligned, and bail out in those
cases instead of trashing any kernel state.
Given server sessions can be given a name, we should allow retrieving
it instead of using the default implementation of GetName(), which would
just return "[UNKNOWN KERNEL OBJECT]".
The AddressArbiter type isn't actually used, given the arbiter itself
isn't a direct kernel object (or object that implements the wait object
facilities).
Given this, we can remove the enum entry entirely.
Similar to svcGetProcessList, this retrieves the list of threads
from the current process. In the kernel itself, a process instance
maintains a list of threads, which are used within this function.
Threads are registered to a process' thread list at thread
initialization, and unregistered from the list upon thread destruction
(if said thread has a non-null owning process).
We assert on the debug event case, as we currently don't implement
kernel debug objects.
Now that ShouldWait() is a const qualified member function, this one can
be made const qualified as well, since it can handle passing a const
qualified this pointer to ShouldWait().
Previously this was performing a u64 + int sign conversion. When dealing
with addresses, we should generally be keeping the arithmetic in the
same signedness type.
This also gets rid of the static lifetime of the constant, as there's no
need to make a trivial type like this potentially live for the entire
duration of the program.
This doesn't really provide any benefit to the resource limit interface.
There's no way for callers to any of the service functions for resource
limits to provide a custom name, so all created instances of resource
limits other than the system resource limit would have a name of
"Unknown".
The system resource limit itself is already trivially identifiable from
its limit values, so there's no real need to take up space in the object to
identify one object meaningfully out of N total objects.
Since C++17, the introduction of deduction guides for locking facilities
means that we no longer need to hardcode the mutex type into the locks
themselves, making it easier to switch mutex types, should it ever be
necessary in the future.
Since C++17, we no longer need to explicitly specify the type of the
mutex within the lock_guard. The type system can now deduce these with
deduction guides.
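For instance (a minimal sketch of the deduction in action):
    #include <mutex>

    std::mutex s_mutex;

    void Example() {
        // Pre-C++17: std::lock_guard<std::mutex> lock(s_mutex);
        // C++17: the mutex type is deduced, so switching the mutex type
        // later only requires changing the member itself, not every
        // lock site.
        std::lock_guard lock{s_mutex};
    }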
The kernel makes sure that the given size to unmap is always the same
size as the entire region managed by the shared memory instance,
otherwise it returns an error code signifying an invalid size.
This is similarly done for transfer memory (which we already check for).
Reports the (mostly) correct size through svcGetInfo now for queries to
total used physical memory. This still doesn't correctly handle memory
allocated via svcMapPhysicalMemory, however, we don't currently handle
that case anyways.
This will make operating with the process-related SVC commands much
nicer in the future (the parameter representing the stack size in
svcStartProcess is a 64-bit value).
In some cases, our callbacks were using s64 as a parameter, and in other
cases, they were using an int, which is inconsistent.
To make all callbacks consistent, we can just use an s64 as the type for
late cycles, given it gets rid of the need to cast internally.
While we're at it, also resolve some signed/unsigned conversions that
were occurring related to the callback registration.
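A hedged sketch of the resulting callback shape (the alias and names
are assumptions about the core timing interface, not the exact API):
    #include <cstdint>
    #include <functional>

    using u64 = std::uint64_t;
    using s64 = std::int64_t;

    // Every callback now takes the late-cycle count as s64, so no
    // int <-> s64 casting is needed internally when scheduling
    // follow-up events.
    using TimedCallback = std::function<void(u64 userdata, s64 cycles_late)>;

    void SomeDeviceUpdate(u64 /*userdata*/, s64 cycles_late) {
        // Reschedule based on how late the event fired, staying in s64.
        static_cast<void>(cycles_late);
    }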
One behavior that we weren't handling properly in our heap allocation
process was the ability for the heap to be shrunk down in size if a
larger size was previously requested.
This adds the basic behavior to do so and also gets rid of HeapFree, as
it's no longer necessary now that we have allocations and deallocations
going through the same API function.
While we're at it, fully document the behavior that this function
performs.
Makes it more obvious that this function is intending to stand in for
the actual supervisor call itself, and not acting as a general heap
allocation function.
Also the following change will merge the freeing behavior of HeapFree
into this function, so leaving it as HeapAllocate would be misleading.
In cases where HeapAllocate is called with the same size of the current
heap, we can simply do nothing and return successfully.
This avoids doing work where we otherwise don't have to. This is also
what the kernel itself does in this scenario.
Another holdover from Citra that can be tossed out is the notion of the
heap needing to be allocated at different addresses. On the Switch, the
base address of the heap will always be managed by the memory allocator
in the kernel, so this doesn't need to be specified in the function's
interface itself.
The heap on the switch is always allocated with read/write permissions,
so we don't need to specify the memory permissions as part of the heap
allocation itself either.
This also corrects the error code returned from within the function.
If the size of the heap is larger than the entire heap region, then the
kernel will report an out of memory condition.
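A rough, self-contained sketch of the heap behavior described in the
last few messages (the region size, base address, and error signaling
are placeholders for illustration only):
    #include <cstdint>
    #include <optional>

    struct HeapSketch {
        std::uint64_t heap_region_size = 0x180000000;  // placeholder region size
        std::uint64_t heap_base = 0x8000000;           // placeholder base address
        std::uint64_t current_heap_size = 0;

        // Returns the heap base on success; std::nullopt stands in for
        // the out-of-memory error code the kernel would return.
        std::optional<std::uint64_t> SetHeapSize(std::uint64_t size) {
            // Larger than the entire heap region: out-of-memory condition.
            if (size > heap_region_size) {
                return std::nullopt;
            }
            // Same size as the current heap: nothing to do, succeed.
            if (size == current_heap_size) {
                return heap_base;
            }
            // Otherwise grow or shrink the mapping to the requested size
            // (the actual mapping work is elided). The heap is always
            // mapped read/write, so no permissions parameter is needed.
            current_heap_size = size;
            return heap_base;
        }
    };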
The use of a shared_ptr is an implementation detail of the VMManager
itself when mapping memory. Because of that, we shouldn't require all
users of the CodeSet to have to allocate the shared_ptr ahead of time.
It's intended that CodeSet simply pass in the required direct data, and
that the memory manager takes care of it from that point on.
This means we just do the shared pointer allocation in a single place,
when loading modules, as opposed to in each loader.
Makes it more evident that one is for actual code and one is for actual
data. Mutable and static are less than ideal terms here, because
read-only data is technically not mutable, but we were mapping it with
that label.
Given this is utilized by the loaders, this allows avoiding inclusion of
the kernel process definitions where avoidable.
This also keeps the loading format for all executable data separate from
the kernel objects.
Rather than make a global accessor for this sort of thing, we can make
it part of the thread interface itself. This allows getting rid of a
hidden global accessor in the kernel code.
This condition was checking against the nominal thread priority, whereas
the kernel itself checks against the current priority instead. We were
also assigning the nominal priority, when we should be assigning
current_priority, which takes priority inheritance into account.
This can lead to the incorrect priority being assigned to a thread.
Given we recursively update the relevant threads, we don't need to go
through the whole mutex waiter list. This matches what the kernel does
as well (only accessing the first entry within the waiting list).
Makes it an instantiable class like it is in the actual kernel. This
will also allow removing reliance on global accessors in a following
change, now that we can encapsulate a reference to the system instance
in the class.
Within the kernel, shared memory and transfer memory facilities exist
as completely different kernel objects. They also have different
validity checking. Therefore, we shouldn't be treating the two as the
same kind of memory.
They also differ behaviorally. Shared memory is intended for sharing
memory between processes, while transfer memory is intended for
transferring memory to other processes.
This breaks out the handling for transfer memory into its own class and
treats it as its own kernel object. This is also important when we
consider resource limits, particularly because transfer memory is
limited by the resource limit value set for it.
While we currently don't handle resource limit testing against objects
yet (but we do allow setting them), this will make implementing that
behavior much easier in the future, as we don't need to distinguish
between shared memory and transfer memory allocations in the same place.
With this, all kernel objects finally have all of their data members
behind an interface, making it nicer to reason about interactions with
other code (as external code no longer has the freedom to totally alter
internals and potentially mess up invariants).
There's no real need to use a shared lifetime here, since we don't
actually expose them to anything else. This is also kind of an
unnecessary use of the heap, given the objects themselves are so small;
small enough, in fact, that changing over to optionals actually reduces
the overall size of the HLERequestContext struct (818 bytes to 808
bytes).
Now that we have the address arbiter extracted to its own class, we can
fix an inaccuracy with the kernel: there isn't only one address
arbiter. Each process instance contains its own AddressArbiter instance
in the actual kernel.
This fixes that and gets rid of another long-standing issue that could
arise when attempting to create more than one process.
Similar to how WaitForAddress was isolated to its own function, we can
also move the necessary conditional checking into the address arbiter
class itself, allowing us to hide the implementation details of it from
public use.
Rather than let the service call itself work out which function is the
proper one to call, we can make that a behavior of the arbiter itself,
so we don't need to directly expose those implementation details.
Places all of the functions for address arbiter operation into a class.
This will be necessary for future deglobalizing efforts related to both
the memory and system itself.
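A hedged skeleton of that encapsulation (the stand-in types and member
names are assumptions, not the actual declarations):
    #include <cstdint>

    // Minimal stand-ins so the skeleton is self-contained; in the real
    // code these come from the core/kernel headers.
    namespace Core { class System; }
    using VAddr = std::uint64_t;
    using ResultCode = std::uint32_t;

    class AddressArbiter {
    public:
        explicit AddressArbiter(Core::System& system) : system{system} {}

        // The arbiter decides internally which signal/wait variant
        // applies, so the SVC handlers no longer need to know those
        // implementation details.
        ResultCode SignalToAddress(VAddr address, std::int32_t num_to_wake);
        ResultCode WaitForAddress(VAddr address, std::int64_t timeout);

    private:
        Core::System& system;
    };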
Removes a few inclusion dependencies from the headers or replaces
existing ones with ones that don't indirectly include the required
headers.
This allows removing an inclusion of core/memory.h, meaning that if the
memory header is ever changed in the future, it won't result in
rebuilding the entirety of the HLE services (as the IPC headers are used
quite ubiquitously throughout the HLE service implementations).
Avoids directly relying on the global system instance and instead makes
an arbitrary system instance an explicit dependency on construction.
This also allows removing dependencies on some global accessor
functions.
Given we already pass in a reference to the kernel that the shared
memory instance is created under, we can just use that to check the
current process, rather than using the global accessor functions.
This allows removing direct dependency on the system instance entirely.
The kernel allows restricting the total size of the handle table through
the process capability descriptors. Until now, this functionality wasn't
hooked up. With this, the process handle tables become properly restricted.
In the case of metadata-less executables, the handle table will assume
the maximum size is requested, preserving the behavior that existed
before these changes.
A fairly trivial change. Other sections of the codebase use nested
namespaces instead of separate namespaces here. This one must have just
been overlooked.
Gets rid of the largest set of mutable global state within the core.
This also paves a way for eliminating usages of GetInstance() on the
System class as a follow-up.
Note that no behavioral changes have been made, and this simply extracts
the functionality into a class. This also has the benefit of making
dependencies on the core timing functionality explicit within the
relevant interfaces.
Places all of the timing-related functionality under the existing Core
namespace to keep things consistent, rather than having the timing
utilities sitting in their own completely separate namespace.
This class is a holdover from Citra; the Horizon kernel on the Switch
has no prominent kernel object that functions as a timer, at least not
to the degree of sophistication that this class provided.
As such, this can be removed entirely. This class also wasn't used at
all in any meaningful way within the core, so this was just code sitting
around doing nothing. This also allows removing a few things from the
main KernelCore class, letting it use slightly fewer resources overall
(though very minor and not anything really noticeable).
No inheritors of the WaitObject class actually make use of their own
implementations of these functions, so they can be made non-virtual.
It's also kind of sketchy to allow overriding how the threads get added
to the list anyway, given the kernel itself on the actual hardware
doesn't seem to customize this behavior.
Looking into the implementation of the C++ standard facilities that seem
to be within all modules, it appears that they use 7 as a break reason
to indicate an uncaught C++ exception.
This was primarily found via the third last function called within
Horizon's equivalent of libcxxabi's demangling_terminate_handler(),
which passes the value 0x80000007 to svcBreak.
This is a bounds check to ensure that the thread priority is within the
valid range of 0-64. If it exceeds 64, that doesn't necessarily mean
that an actual priority of 64 was expected (it actually means whoever
called the function screwed up their math).
Instead clarify the message to indicate the allowed range of thread
priorities.
Now that we handle the kernel capability descriptors we can correct
CreateThread to properly check against the core and priority masks
like the actual kernel does.
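A self-contained sketch of the mask checks this enables (the real code
derives the masks from the parsed capability descriptors):
    #include <cstdint>

    // Returns true if the requested priority and core are permitted by
    // the process' capability-derived masks.
    bool IsAllowedThreadConfiguration(std::uint64_t priority_mask,
                                      std::uint64_t core_mask,
                                      std::uint32_t priority,
                                      std::uint32_t core) {
        const bool priority_allowed = ((1ULL << priority) & priority_mask) != 0;
        const bool core_allowed = ((1ULL << core) & core_mask) != 0;
        return priority_allowed && core_allowed;
    }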
This makes the naming more closely match its meaning. It's just a
preferred core, not a required default core. This also makes the usages
of this term consistent across the thread and process implementations.
This function isn't a general purpose function that should be exposed to
everything, given it's specific to initializing the main thread for a
Process instance.
Given that, it's a tad bit more sensible to place this within
process.cpp, which keeps it visible only to the code that actually needs
it.
In all cases that these functions are needed, the VMManager can just be
retrieved and used instead of providing the same functions in Process'
interface.
This also makes it a little nicer dependency-wise, since it gets rid of
cases where the VMManager interface was being used, and then switched
over to using the interface for a Process instance. Instead, it makes
all accesses uniform and uses the VMManager instance for all necessary
tasks.
All the basic memory mapping functions did was forward to the Process'
VMManager instance anyways.
Similar to the service capability flags; however, we currently don't
emulate the GIC, so this handles all interrupts as being valid for the
time being.
Handles the priority mask and core mask flags to allow building up the
masks to determine the usable thread priorities and cores for a kernel
process instance.
We've had the old kernel capability parser from Citra; however, it's
unused code and doesn't actually map to how the kernel on the Switch
does it. This introduces the basic functional skeleton for parsing
process capabilities.
If a thread handle is passed to svcGetProcessId, the kernel attempts to
access the process ID via the thread instance's owning process.
Technically, this function should also be handling kernel debug
objects; however, we currently don't handle those kernel objects yet,
so I've left a note via a comment about it to remind myself when
implementing it in the future.
Starts the process ID counter off at 81, which is what the kernel itself
checks against internally when creating processes. It's actually
supposed to panic if the PID is less than 81 for a userland process.
Adds the barebones enumeration constants and functions in place to
handle memory attributes, while also essentially leaving the attribute
itself non-functional.
In the previous change, the memory writing was moved into the service
function itself; however, it still had a problem: the entire MemoryInfo
structure wasn't being written out, only the first 32 bytes of it. We
still need to write out the trailing two reference count members and
zero out the padding bits.
Not doing this can result in wrong behavior in userland code in the following
scenario:
    MemoryInfo info; // Put on the stack, not guaranteed to be zeroed out.
    svcQueryMemory(&info, ...);
    if (info.device_refcount == ...) // Whoops, uninitialized read.
This can also cause the wrong thing to happen if the user code uses
std::memcmp to compare the struct, with another one (questionable, but
allowed), as the padding bits are not guaranteed to be a deterministic
value. Note that the kernel itself also fully zeroes out the structure
before writing it out including the padding bits.
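A hedged, self-contained sketch of the emulator-side fix (the field
layout here is illustrative; the real MemoryInfo differs in detail):
    #include <cstdint>
    #include <cstring>

    struct MemoryInfo {
        std::uint64_t base_address;
        std::uint64_t size;
        std::uint32_t state;
        std::uint32_t attributes;
        std::uint32_t permissions;
        std::uint32_t device_refcount;
        std::uint32_t ipc_refcount;
        std::uint32_t padding;  // must also be written out, as zero
    };

    void WriteMemoryInfo(std::uint8_t* guest_memory, const MemoryInfo& queried) {
        // Value-initialize so every member, including the explicit
        // padding field and the trailing reference counts, starts at zero.
        MemoryInfo out{};
        out.base_address = queried.base_address;
        out.size = queried.size;
        out.state = queried.state;
        out.attributes = queried.attributes;
        out.permissions = queried.permissions;
        out.device_refcount = queried.device_refcount;
        out.ipc_refcount = queried.ipc_refcount;
        // Write the entire structure, not just the first 32 bytes of it.
        std::memcpy(guest_memory, &out, sizeof(out));
    }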
Moves the memory writes directly into QueryProcessMemory instead of
letting the wrapper function do it. It would be inaccurate to allow the
handler to do it because there are cases where memory shouldn't even be
written to; for example, if the given process handle is invalid.
HOWEVER, if the memory writing is within the wrapper, then we have no
control over whether these memory writes occur, meaning in an error case, 68
bytes of memory randomly get trashed with zeroes, 64 of those being
written to wherever the memory info address points to, and the remaining
4 being written wherever the page info address points to.
One solution in this case would be to just conditionally check within
the handler itself, but this is kind of smelly, given the handler
shouldn't be performing conditional behavior itself; that's a behavior of
the managed function. In other words, if you remove the handler from the
equation entirely, does the function still retain its proper behavior?
In this case, no.
Now, we don't potentially trash memory from this function if an invalid
query is performed.
This would result in svcSetMemoryAttribute getting the wrong value for
its third parameter. This is currently fine, given the service function
is stubbed; however, this will be unstubbed in a future change, so this
needs to change.
The kernel returns a memory info instance with the base address set to
the end of the address space, and the size of said block as
0 - address_space_end, it doesn't set both of said members to zero.
Gets the two structures out of an unrelated header and places them with
the rest of the memory management code.
This also corrects the structures. PageInfo appears to only contain a
32-bit flags member, and the extra padding word in MemoryInfo isn't
necessary.
Amends the MemoryState enum to use the same values as the actual
kernel does. Also provides the necessary operators to operate on them.
This will be necessary in the future for implementing
svcSetMemoryAttribute, as memory block state is checked before applying
the attribute.
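A hedged sketch of the operator definitions (the enumerator values here
are placeholders, not the kernel's actual ones):
    #include <cstdint>
    #include <type_traits>

    enum class MemoryState : std::uint32_t {
        Unmapped = 0x00,
        Code = 0x01,
        Heap = 0x05,
        FlagMask = 0xFF,
    };

    constexpr MemoryState operator|(MemoryState lhs, MemoryState rhs) {
        using Underlying = std::underlying_type_t<MemoryState>;
        return static_cast<MemoryState>(static_cast<Underlying>(lhs) |
                                        static_cast<Underlying>(rhs));
    }

    constexpr MemoryState operator&(MemoryState lhs, MemoryState rhs) {
        using Underlying = std::underlying_type_t<MemoryState>;
        return static_cast<MemoryState>(static_cast<Underlying>(lhs) &
                                        static_cast<Underlying>(rhs));
    }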
The Process object kept itself alive indefinitely because its handle_table
contains a SharedMemory object which owns a reference to the same Process object,
creating a circular ownership scenario.
Break that up by storing only a non-owning pointer in the SharedMemory object.
This was only ever public so that code could check whether or not a
handle was valid. Instead of exposing the object directly and
allowing external code to potentially mess with the map contents, we
just provide a member function that allows checking whether or not a
handle is valid.
This makes all member variables of the VMManager class private except
for the page table.
While partially correct, this service call allows the retrieved event to
be null, as it also uses the same handle to check if it was referring to
a Process instance. The previous two changes put the necessary machinery
in place to allow for this, so we can simply call those member functions
here and be done with it.
Process instances can be waited upon for state changes. This is also
utilized by svcResetSignal, which will be modified in an upcoming
change. This simply puts all of the WaitObject related machinery in
place.
svcResetSignal relies on the event instance to have already been
signaled before attempting to reset it. If this isn't the case, then an
error code has to be returned.
This function simply does a handle table lookup for a writable event
instance identified by the given handle value. If a writable event
cannot be found for the given handle, then an invalid handle error is
returned. If a writable event is found, then it simply signals the
event, as one would expect.
svcCreateEvent operates by creating both a readable and writable event
and then attempts to add both to the current process' handle table.
If adding either of the events to the handle table fails, then the
relevant error from the handle table is returned.
If adding the readable event after the writable event to the table
fails, then the writable event is removed from the handle table and the
relevant error from the handle table is returned.
Note that since we do not currently test resource limits, we don't check
the resource limit table yet.
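A hedged sketch of that flow; the factory and handle table member names
are assumptions based on the description above, not verified API:
    // Sketch only, not the actual implementation.
    static ResultCode CreateEvent(KernelCore& kernel, Handle* out_writable,
                                  Handle* out_readable) {
        auto [writable_event, readable_event] =
            WritableEvent::CreateEventPair(kernel, "svcCreateEvent");

        auto& handle_table = kernel.CurrentProcess()->GetHandleTable();

        const auto write_result = handle_table.Create(writable_event);
        if (write_result.Failed()) {
            return write_result.Code();
        }

        const auto read_result = handle_table.Create(readable_event);
        if (read_result.Failed()) {
            // Roll back the writable event's handle before propagating
            // the error from the second insertion.
            handle_table.Close(*write_result);
            return read_result.Code();
        }

        *out_writable = *write_result;
        *out_readable = *read_result;
        return RESULT_SUCCESS;
    }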
Two kernel objects should absolutely never have the same handle ID type.
This can cause incorrect behavior when it comes to retrieving object
types from the handle table. In this case it allows converting a
WritableEvent into a ReadableEvent and vice-versa, which is undefined
behavior, since the object types are not the same.
This also corrects ClearEvent() to check both kernel types like the
kernel itself does.
The kernel uses the handle table of the current process to retrieve the
process that should be used to retrieve certain information. To someone
not familiar with the kernel, this might raise the question of "Ok,
sounds nice, but doesn't this make it impossible to retrieve information
about the current process?".
No, it doesn't, because HandleTable instances in the kernel have the
notion of a "pseudo-handle", where certain values allow the kernel to
lookup objects outside of a given handle table. Currently, there's only
a pseudo-handle for the current process (0xFFFF8001) and a pseudo-handle
for the current thread (0xFFFF8000), so to retrieve the current process,
one would just pass 0xFFFF8001 into svcGetInfo.
The lookup itself in the handle table would be something like:
    template <typename T>
    T* Lookup(Handle handle) {
        if (handle == PSEUDO_HANDLE_CURRENT_PROCESS) {
            return CurrentProcess();
        }
        if (handle == PSEUDO_HANDLE_CURRENT_THREAD) {
            return CurrentThread();
        }
        return static_cast<T*>(&objects[handle]);
    }
which, as is shown, allows accessing the current process or current
thread, even if those two objects aren't actually within the HandleTable
instance.
Our implementation of svcGetInfo was slightly incorrect in that we
weren't doing proper error checking everywhere. Instead, reorganize it
to be similar to how the kernel seems to do it.
This is more hardware accurate. On the actual system, there is a
differentiation between the signaler and the signalee; they form a
client/server relationship much like ServerPort and ClientPort.
The opposite of the getter functions, this function sets the limit value
for a particular ResourceLimit resource category, with the restriction
that the new limit value must be equal to or greater than the current
resource value. If this is violated, then ERR_INVALID_STATE is returned.
For example, assume:
    current[Events] = 10;
    limit[Events]   = 20;
A call to this service function lowering the limit value to 10 would be
fine; however, attempting to lower it to 9 in this case would cause an
invalid state error.
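A self-contained sketch of the check itself (member names and the
resource count are illustrative):
    #include <array>
    #include <cstddef>
    #include <cstdint>

    struct ResourceLimitSketch {
        static constexpr std::size_t NumResources = 6;  // placeholder count
        std::array<std::int64_t, NumResources> current{};
        std::array<std::int64_t, NumResources> limits{};

        // Returns true on success; false stands in for ERR_INVALID_STATE.
        bool SetLimitValue(std::size_t resource, std::int64_t new_limit) {
            // The new limit may not drop below what is currently in use.
            if (new_limit < current[resource]) {
                return false;
            }
            limits[resource] = new_limit;
            return true;
        }
    };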
This kernel service function is essentially the exact same as
svcGetResourceLimitLimitValue(), with the only difference being that it
retrieves the current value for a given resource category using the
provided resource limit handle, rather than retrieving the limiting
value of that resource limit instance.
Given these are exactly the same and only differ on returned values, we
can extract the existing code for svcGetResourceLimitLimitValue() to
handle both values.
This kernel service function retrieves the maximum allowable value for
a provided resource category for a given resource limit instance. Given
we already have the functionality added to the resource limit instance
itself, it's sufficient to just hook it up.
The error scenarios for this are:
1. If an invalid resource category type is provided, then ERR_INVALID_ENUM is returned.
2. If an invalid handle is provided, then ERR_INVALID_HANDLE is returned (bad thing goes in, bad thing goes out, as one would expect).
If neither of the above error cases occur, then the out parameter is
provided with the maximum limit value for the given category and success
is returned.
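A hedged sketch of how both SVCs can share one lookup path (the helper
name, the enum, and the member functions used are assumptions for
illustration, not the verified API):
    // Sketch only; error constants and types are the project's, names assumed.
    enum class ResourceLimitValueType {
        CurrentValue,
        LimitValue,
    };

    static ResultVal<s64> RetrieveResourceLimitValue(Handle resource_limit_handle,
                                                     u32 resource_type,
                                                     ResourceLimitValueType value_type) {
        const auto type = static_cast<ResourceType>(resource_type);
        if (!IsValidResourceType(type)) {
            return ERR_INVALID_ENUM;
        }

        const auto& handle_table = Core::CurrentProcess()->GetHandleTable();
        const auto resource_limit = handle_table.Get<ResourceLimit>(resource_limit_handle);
        if (!resource_limit) {
            return ERR_INVALID_HANDLE;
        }

        if (value_type == ResourceLimitValueType::CurrentValue) {
            return MakeResult(resource_limit->GetCurrentResourceValue(type));
        }
        return MakeResult(resource_limit->GetMaxResourceValue(type));
    }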
This function simply creates a ResourceLimit instance and attempts to
create a handle for it within the current process' handle table. If the
kernel fails to either create the ResourceLimit instance or create a
handle for the ResourceLimit instance, it returns a failure code
(OUT_OF_RESOURCE and HANDLE_TABLE_FULL, respectively). Finally, it exits
by providing the output parameter with the handle value for the
ResourceLimit instance and returning that it was successful.
Note: We do not return OUT_OF_RESOURCE because, if yuzu runs out of
available memory, then new will currently throw. We *could* allocate the
kernel instance with std::nothrow, however this would be inconsistent
with how all other kernel objects are currently allocated.
<random> isn't necessary directly within the header and can be placed
in the cpp file where it's needed. Avoids propagating random generation
utilities via a header file.
Cleans out the citra/3DS-specific implementation details that don't
apply to the Switch. Sets the stage for implementing ResourceLimit
instances properly.
While we're at it, remove the erroneous checks within CreateThread() and
SetThreadPriority(). While these are indeed checked in some capacity,
they are not checked via a ResourceLimit instance.
In the process of moving out Citra-specifics, this also replaces the
system ResourceLimit instance's values with ones from the Switch.
Both member functions assume the passed in target process will not be
null. Instead of making this assumption implicit, we can change the
functions to take references and enforce this at the type-system level.
Makes the interface nicer to use in terms of 64-bit code, as it makes it
less likely for one to get truncation warnings (and also makes sense in
the context of the rest of the interface where 64-bit types are used for
sizes and offsets).
Similar to PR 1706, which cleans up the error codes for the filesystem
code, but done for the kernel error codes. This removes the ErrCodes
namespace and specifies the errors directly. This also fixes up any
straggling lines of code that weren't using the named error codes where
applicable.
* svcBreak now dumps information from the debug buffer passed
info1 and info2 seem to sometimes hold an address to a buffer; this is
usually 4 bytes, or the size of an int, and contains an error code.
There are other circumstances where it can be something different, so
we hexdump these to examine them at a later date.
* Addressed comments
* get rid of boost::optional
* Remove optional references
* Use std::reference_wrapper for optional references
* Fix clang format
* Fix clang format part 2
* Addressed feedback
* Fix clang format and MacOS build
Now that we've gotten the inaccurate error codes out of the way, we can
finally toss away a bunch of these, trimming down the error codes to
ones that are actually used and knocking out two TODO comments.
All priority checks are supposed to occur before checking the validity
of the thread handle; we're also not supposed to return
ERR_NOT_AUTHORIZED here.
In the kernel, there isn't a singular handle table that everything gets
tossed into or used; rather, each process gets its own handle table that
it uses. This currently isn't an issue for us, since we only execute one
process at the moment, but we may as well get this out of the way so
it's not a headache later on.
This should be comparing against the queried process' vma_map, not the
current process'. The only reason this hasn't become an issue yet is we
currently only handle one process being active at any time.
Now that the changes clarifying the address spaces have been merged, we
can wrap the checks that the kernel performs when mapping shared memory
(and other forms of memory) into its own helper function and then use
those within MapSharedMemory and UnmapSharedMemory to complete the
sanitizing checks that are supposed to be done.
So, one thing that's puzzled me is why the kernel seemed to *not* use
the direct code address ranges in some cases for some service functions.
For example, in svcMapMemory, the full address space width is compared
against for validity, but for svcMapSharedMemory, it compares against
0xFFE00000, 0xFF8000000, and 0x7FF8000000 as upper bounds, and uses
either 0x200000 or 0x8000000 as the lower-bounds as the beginning of the
compared range. Coincidentally, these exact same values are also used in
svcGetInfo, and also when initializing the user address space, so this
is actually retrieving the ASLR extents, not the extents of the address
space in general.
This should help diagnose crashes more easily and prevent many users
from thinking that a game is still running when in fact it's just an
audio thread still running (this is typically not killed when svcBreak
is hit, since the game expects us to do this).
A fairly basic service function, which only appears to currently support
retrieving the process state. This also alters the ProcessStatus enum to
contain all of the values that a kernel process seems to be capable of
reporting with regard to state.
These only exist to ferry data into a Process instance and end up going
out of scope quite early. Because of this, we can just make it a plain
struct for holding things and just std::move it into the relevant
function. There's no need to make this inherit from the kernel's Object
type.
Regular value initialization is adequate here for zeroing out data. It
also has the benefit of not invoking undefined behavior if a non-trivial
type is ever added to the struct for whatever reason.