Handles the placement of the stack a little more nicely compared to the
previous code, which was off in a few ways. For example, the stack (new
map) region shouldn't span the width of the entire address space if the
region size calculation ends up being zero. Instead, it should be placed
at the same location as the TLS IO region and also have the same size.
In the event the TLS IO region has a size of zero, we should do the same
thing. This fixes our memory layout a little and also resolves some
cases where assertions could trigger due to the memory layout being
incorrect.
Taking the json instance as a constant reference makes all moves into
the parameter non-functional, resulting in copies. Taking it by value
allows moves to actually take effect.
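A minimal illustration of the difference (using nlohmann::json; the
function names are just for the example):

#include <nlohmann/json.hpp>
#include <utility>

// By const reference: a caller's std::move has no effect, and storing the
// argument always performs a copy.
void StoreByConstRef(const nlohmann::json& data) {
    auto stored = data;  // copy
}

// By value: callers can move into the parameter, and the function can move
// it onward to its final destination.
void StoreByValue(nlohmann::json data) {
    auto stored = std::move(data);  // no deep copy if the caller moved
}

void Example() {
    nlohmann::json doc = {{"key", "value"}};
    StoreByValue(std::move(doc));  // the contents are moved, not copied
}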
A normal user shouldn't change this, as it will slow down the emulation and can lead to bugs or crashes. The renaming is done in order to prevent users from leaving this on without a way to turn it off from the UI.
Extracts out all of the thread local storage management from thread
instances themselves and makes the owning process handle the management
of the memory. This brings the memory management slightly more in line
with how the kernel handles these allocations.
Furthermore, this also makes the TLS page management a little more
readable compared to the lingering implementation that was carried over
from Citra.
This will be necessary for making our TLS slot management slightly more
straightforward. This can also be utilized for other purposes in the
future.
We can implement the existing simpler overload in terms of this one
anyway; we just pass the beginning and end of the ASLR region as the
boundaries.
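Roughly, the idea looks like this (a sketch; the accessor names are
assumptions rather than the exact interface):

#include <cstdint>

using VAddr = std::uint64_t;

// Stand-ins for the real ASLR region accessors.
VAddr GetASLRRegionBaseAddress();
VAddr GetASLRRegionEndAddress();

// General overload: search for a free region within explicit boundaries.
VAddr FindFreeRegion(VAddr begin, VAddr end, std::uint64_t size);

// The simpler overload is just the general one bounded by the ASLR region.
VAddr FindFreeRegion(std::uint64_t size) {
    return FindFreeRegion(GetASLRRegionBaseAddress(), GetASLRRegionEndAddress(),
                          size);
}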
The event should only be signaled when the output audio device changes, for example from speakers to a USB headset. We don't distinguish between different output devices internally yet, so there's no need to signal the event for now.
DeltaFragments are only used to download and apply partial patches on a real console, and are not useful to us at all. Most patch NSPs do not include them, and when they do, it's a waste of space to install them.
StartLrAssignmentMode and StopLrAssignmentMode don't require any real implementation, as they're only used to show the screen for changing the controller orientation if the user wishes to do so. Ever since #1634 this hasn't been needed, since users can specify the controller orientation from the config and swap it at any time. We store a private member just in case this gets used for anything extra in the future.
InitializeApplicationInfoRestricted will need further implementation, as it checks additional user requirements for the game. Since we're emulating, we assume the user owns the game and skip these checks for now; proper handling will need to be added later on.
This PR attempts to implement the shared memory provided by GetSharedMemoryNativeHandle. There is still more work to be done, however, which requires an overhaul of the current time module to handle clock contexts. This PR is mainly to get the basic functionality of the SharedMemory working and to allow additions to it while things get improved.
Things to note:
Memory barriers are used in the shared memory, and a better solution will eventually be needed to implement them properly. Currently, this PR fakes the memory barriers, since everything is synchronous and single threaded: a write increments the counter and populates both data slots, and a read simply returns the most recently written data (a rough sketch follows at the end of these notes).
Specific values in the shared memory would need to be updated periodically. This isn't included in this PR since we don't actively do this yet. In a later PR when time is refactored this should be done.
Finally, since we don't handle clock contexts yet, when time is refactored we will need to update the shared memory for specific contexts. This PR already does this; however, since the contexts are all identical and not separated, we're just updating the same values for each context, which in this case are empty.
Time:SetStandardUserSystemClockAutomaticCorrectionEnabled and Time:IsStandardUserSystemClockAutomaticCorrectionEnabled are also partially implemented in this PR. The implementation is partial because, once again, we lack clock contexts. This will be improved on in a future PR.
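Roughly, the faked barrier described above behaves like this (a simplified
sketch, not the actual shared memory layout):

#include <array>
#include <cstdint>

template <typename T>
struct FakeLocklessValue {
    std::uint32_t counter{};
    std::array<T, 2> slots{};

    // "Write barrier": bump the counter and populate both data slots. This is
    // fine for now because everything runs synchronously on a single thread.
    void Store(const T& value) {
        ++counter;
        slots[0] = value;
        slots[1] = value;
    }

    // "Read barrier": return the most recently written data.
    T Load() const {
        return slots[counter % 2];
    }
};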
This PR closes issue #2556
This is more representative of what actually occurs, as the web applet does support remote URLs, which wouldn't need a romfs callback. This paves the way for easily supporting that in the future with a call like 'OpenPageRemote' or similar.
Even though it has been proven that IAudioRenderer:SystemEvent is
actually an automatic event, the current implementation treats it as
manual, so its implementation needs to be corrected alongside such a
change. As it stands, that change introduced a series of regressions and
softlocks in multiple games. Therefore, this PR reverts the change until
a correct implementation is made.
These source files have been unused for the entire lifecycle of the
project. They're a hold-over from Citra and only add to the build time
of the project, so they can be removed.
There's also likely no way this would ever work in yuzu in its current
form without revamping quite a bit of it, given how different the GPU on
the Switch is compared to the 3DS.
The old implementation had faulty thread-safe methods where events could
be missed. This implementation unifies the unsafe/safe methods and makes
core timing thread-safe overall.
IPC-100 was renamed to InitializeApplicationInfoOld instead of InitializeApplicationInfo. IPC-150 makes an identical call to IPC-100, but does extra processing. They shouldn't share the same name, as that's quite confusing to debug.
These can be generified together by using a concept type to designate
them. This also has the benefit of not making copies of potentially very
large arrays.
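One way to express this in C++17 (the trait and function names here are
hypothetical, purely to illustrate the technique):

#include <array>
#include <cstddef>
#include <cstdint>
#include <type_traits>

// Hypothetical trait marking the array types the function should accept.
template <typename T>
struct IsAcceptedArray : std::false_type {};

template <std::size_t N>
struct IsAcceptedArray<std::array<std::uint8_t, N>> : std::true_type {};

// One constrained template replaces several near-identical overloads, and
// taking the argument by const reference avoids copying large arrays.
template <typename T, typename = std::enable_if_t<IsAcceptedArray<T>::value>>
void Process(const T& data) {
    // ... operate on data without copying it ...
}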
This is performing more work than would otherwise be necessary during
VMManager's destruction. All we actually want to occur in this scenario
is for any allocated memory to be freed, which will happen automatically
as the VMManager instance goes out of scope.
Anything else being done is simply unnecessary work.
Given we don't currently implement the personal heap yet, the existing
memory querying functions are essentially doing what the memory querying
types introduced in 6.0.0 do.
So, we can build the necessary machinery over the top of those and just
use them as part of info types.
Treating it as a u16 can result in a sign-conversion warning when
performing arithmetic with it, as a u16 promotes to an int (not an
unsigned int) when arithmetic is performed on it.
This also makes the interface more uniform, as the layout interface now
operates on u32 across the board.
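A small illustration of the promotion issue:

#include <cstdint>

std::uint32_t Scale(std::uint16_t width) {
    // width promotes to int (not unsigned int) before the multiplication, so
    // returning the result as an unsigned type can raise -Wsign-conversion.
    return width * 2;
}

std::uint32_t ScaleUnsigned(std::uint32_t width) {
    // With u32 across the board the arithmetic stays unsigned, so no warning.
    return width * 2u;
}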
Makes the dependency explicit in the TelemetrySession's interface
instead of making it a hidden dependency.
This also revealed a hidden issue with the way the telemetry session was
being initialized. It was attempting to retrieve the app loader and log
out title-specific information. However, this isn't always guaranteed to
be possible.
During the initialization phase, everything is being constructed. It
doesn't mean an actual title has been selected. This is what the Load()
function is for. This potentially results in dead code paths involving
the app loader. Instead, we explicitly add this information when we know
the app loader instance is available.
Previously, the code was accumulating data into a std::vector and then
tossing all of it away if a setting was disabled.
Instead, we can just check if it's disabled and do no work at all if
possible. If it's enabled, then we can append to the vector and
allocate.
Unlikely to impact usage much, but it is slightly less sloppy with
resources.
A few of the aoc service stubs/implementations weren't fully popping all
of the parameters passed to them. This ensures that all parameters are
popped and, at minimum, logged out.
These are only used from within this translation unit, so they don't
need to have external linkage. They were intended to be marked with this
anyways to be consistent with the other service functions.
Renames the members to more accurately indicate what they signify.
"OneShot" and "Sticky" are kind of ambiguous identifiers for the reset
types, and can be kind of misleading. Automatic and Manual communicate
the kind of reset type in a clearer manner. Either the event is
automatically reset, or it isn't and must be manually cleared.
The "OneShot" and "Sticky" terminology is just a hold-over from Citra
where the kernel had a third type of event reset type known as "Pulse".
Given the Switch kernel only has two forms of event reset types, we
don't need to keep the old terminology around anymore.
This reduces the boilerplate by no longer requiring services to write out the current thread explicitly. Using the current thread instead of the client thread is also semantically incorrect, and will become a problem when we implement multicore (at which point there will be multiple current threads)
This corrects cases where it was possible to write more entries into the
write buffer than were requested. Now, we check the size of the buffer
before actually writing into them.
We were also returning the wrong value for
GetAvailableLanguageCodeCount2(). This was previously returning 64, but
only 17 should have been returned. 64 entries is the size of the static
array used in MakeLanguageCode() within the service binary itself, but
isn't the actual total number of language codes present.
The backend is not used until we decide to submit the testcase/telemetry, and creating it early prevents users from updating the credentials properly while the games are running.
Also introduced in REV5 was a variable-size audio command buffer. This
also affects how the size of the work buffer should be determined, so we
can add handling for this as well.
Thankfully, no other alterations were made to how the work buffer size
is calculated in 7.0.0-8.0.0. There were indeed changes made to how
some of the actual audio commands are generated (particularly in REV7);
however, those don't apply here.
Introduced in REV5. This is trivial to add support for, now that
everything isn't a mess of random magic constant values.
All this is, is a change in data type sizes as far as this function
cares.
"Unmagics" quite a few magic constants within this code, making it much
easier to understand. Particularly given this factors out specific
sections into their own self-contained lambda functions.
These are actually quite important indicators of thread lifetimes, so
they should be going into the debug log, rather than being treated as
misc info and delegated to the trace log.
Makes the code much nicer to follow in terms of behavior and control
flow. It also fixes a few bugs in the implementation.
Notably, the thread's owner process shouldn't be accessed in order to
retrieve the core mask or ideal core. This should be done through the
current running process. The only reason this bug wasn't encountered yet
is because we currently only support running one process, and thus every
owner process will be the current process.
We also weren't checking against the process' CPU core mask to see if an
allowed core is specified or not.
With this out of the way, it'll be less noisy to implement proper
handling of the affinity flags internally within the kernel thread
instances.
Provides serialization/deserialization to the database in system save files, accessors for database state and proper handling of both major Mii formats (MiiInfo and MiiStoreData)
This option allows picking the compatibility profile, since a lot of
bugs are fixed in it. We devs will use this option to more easily debug
current problems in our Core implementation.
This is a holdover from Citra, where the 3DS has both
WaitSynchronization1 and WaitSynchronizationN. The Switch only has one
form of wait synchronization (literally WaitSynchronization). This
allows us to throw out code that doesn't apply at all to the Switch
kernel.
Because of this unnecessary dichotomy within the wait synchronization
utilities, we were also neglecting to properly handle waiting on
multiple objects.
While we're at it, we can also scrub out any lingering references to
WaitSynchronization1/WaitSynchronizationN in comments, and change them
to WaitSynchronization (or remove them if the mention no longer
applies).
The actual behavior of this function is slightly more complex than what
we're currently doing within the supervisor call. To avoid dumping most
of this behavior in the supervisor call itself, we can migrate this to
another function.
This member variable is entirely unused. It was only set but never
actually utilized. Given that, we can remove it to get rid of noise in
the thread interface.
Essentially performs the inverse of svcMapProcessCodeMemory. This unmaps
the aliasing region first, then restores the general traits of the
aliased memory.
What this entails, is:
- Restoring Read/Write permissions to the VMA.
- Restoring its memory state to reflect it as a general heap memory region.
- Clearing the memory attributes on the region.
This gives us significantly more control over where in the
initialization process we start execution of the main process.
Previously we were running the main process before the CPU or GPU
threads were initialized (not good). This amends execution to start
after all of our threads are properly set up.
Initially required due to the split codepath with how the initial main
process instance was initialized. We used to initialize the process
like:
Init() {
    main_process = Process::Create(...);
    kernel.MakeCurrentProcess(main_process.get());
}

Load() {
    const auto load_result = loader.Load(*kernel.GetCurrentProcess());
    if (load_result != Loader::ResultStatus::Success) {
        // Handle error here.
    }
    ...
}
which presented a problem.
Setting a created process as the main process would set the page table
for that process as the main page table. This is fine... until we get to
the part that the page table can have its size changed in the Load()
function via NPDM metadata, which can dictate either a 32-bit, 36-bit,
or 39-bit usable address space.
Now that we have full control over the process' creation in load, we can
simply set the initial process as the main process after all the loading
is done, reflecting the potential page table changes without any
special-casing behavior.
We can also remove the cache flushing within LoadModule(), as execution
wouldn't have even begun yet during all usages of this function, now
that we have the initialization order cleaned up.
Now that we have dependencies on the initialization order, we can move
the creation of the main process to a more sensible area: where we
actually load in the executable data.
This allows localizing the creation and loading of the process in one
location, making the initialization of the process much nicer to trace.
Like with CPU emulation, we generally don't want to fire off the threads
immediately after the relevant classes are initialized; we want to do
this only after all necessary data is done loading.
This splits the thread creation into its own interface member function
to allow controlling when these threads in particular get created.
Our initialization process is a little wonkier than one would expect
when it comes to code flow. We initialize the CPU last, as opposed to
hardware, where the CPU obviously needs to come first (otherwise nothing
else would work), and we have code that adds checks to get around this.
For example, in the page table setting code, we check to see if the
system is turned on before we even notify the CPU instances of a page
table switch. This results in dead code (at the moment), because the
only time a page table switch will occur is when the system is *not*
running, preventing the emulated CPU instances from being notified of a
page table switch in a convenient manner (technically the code path
could be taken, but we don't emulate the process creation svc handlers
yet).
This moves the thread creation into its own member function of the core
manager and restores a little order (and predictability) to our
initialization process.
Previously, in the multi-threaded cases, we'd kick off several threads
before even the main kernel process was created and ready to execute (gross!).
Now the initialization process is like so:
Initialization:
1. Timers
2. CPU
3. Kernel
4. Filesystem stuff (kind of gross, but can be amended trivially)
5. Applet stuff (ditto in terms of being kind of gross)
6. Main process (will be moved into the loading step in a following
change)
7. Telemetry (this should be initialized last in the future).
8. Services (4 and 5 should ideally be alongside this).
9. GDB (gross. Uses namespace scope state. Needs to be refactored into a
class or booted altogether).
10. Renderer
11. GPU (will also have its threads created in a separate step in a
following change).
Which... isn't *ideal* per se; however, getting the wonky intertwining
of CPU state initialization out of this mix removes most of the
footguns when it comes to our initialization process.
Some objects declare their handle type as const, while others declare it
as constexpr. This makes the const ones constexpr for consistency, and
prevents unexpected compilation errors if these are ever used within a
constexpr context.
These indicate options that alter how a read/write is performed.
Currently we don't need to handle these, as the only one that seems to
be used is for writes, but all the custom options ever seem to do is
immediate flushing, which we already do by default.
We need to ensure dynarmic gets a valid pointer if the page table is
resized (the relevant pointers would be invalidated in this scenario).
In this scenario, the page table can be resized depending on what kind
of address space is specified within the NPDM metadata (if it's
present).
Adjusts the interface of the wrappers to take a system reference, which
allows accessing a system instance without using the global accessors.
This also allows getting rid of all global accessors within the
supervisor call handling code. While this does make the wrappers
themselves slightly more noisy, this will be further cleaned up in a
follow-up. This eliminates the global system accessors in the current
code while preserving the existing interface.
Keeps the return type consistent with the function name. While we're at
it, we can also reduce the amount of boilerplate involved with handling
these by using structured bindings.
BitField has been trivially copyable since
e99a148628, so we can eliminate these
TODO comments and use ReadObject() directly instead of memcpying the
data.
Rather than make a full copy of the path, we can just use a string view
and truncate the viewed portion of the string instead of creating a totally
new truncated string.
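For example (a generic illustration rather than the exact call site):

#include <string_view>

// Shrinking the view trims the path without allocating a new string.
std::string_view RemoveTrailingSlash(std::string_view path) {
    if (!path.empty() && path.back() == '/') {
        path.remove_suffix(1);
    }
    return path;
}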
In several places, we have request parsers where there's nothing to
really parse, simply because the HLE function in question operates on
buffers. In these cases we can just remove these instances altogether.
In the other cases, we can retrieve the relevant members from the parser
and at least log them out, giving them some use.
Applies the override specifier where applicable. In the case of
destructors that are defaulted in their definition, they can
simply be removed.
This also removes the unnecessary inclusions being done in audin_u and
audrec_u, given their close proximity.
Quite a few of these were out of sync with Switchbrew (and in some cases
entirely wrong). While we're at it, also expand the section of named
members. A segment within the control metadata is used to specify the
maximum sizes of the user, device, and cache storage, as well as their
journal sizes.
These appear to be generally used by the am service (e.g. in
CreateCacheStorage, etc).
We need to check whether the given address is within the kernel address
space, or whether it isn't word-aligned, and bail out in these scenarios
instead of trashing any kernel state.
For whatever reason, shared memory was being used here instead of
transfer memory, which (quite clearly) will not work based off the name
of the function.
This corrects this wonky usage of shared memory.
Given server sessions can be given a name, we should allow retrieving
it instead of using the default implementation of GetName(), which would
just return "[UNKNOWN KERNEL OBJECT]".
The AddressArbiter type isn't actually used, given the arbiter itself
isn't a direct kernel object (or object that implements the wait object
facilities).
Given this, we can remove the enum entry entirely.
Similarly like svcGetProcessList, this retrieves the list of threads
from the current process. In the kernel itself, a process instance
maintains a list of threads, which are used within this function.
Threads are registered to a process' thread list at thread
initialization, and unregistered from the list upon thread destruction
(if said thread has a non-null owning process).
We assert on the debug event case, as we currently don't implement
kernel debug objects.
Now that ShouldWait() is a const qualified member function, this one can
be made const qualified as well, since it can handle passing a const
qualified this pointer to ShouldWait().
Previously this was performing a u64 + int sign conversion. When dealing
with addresses, we should generally be keeping the arithmetic in the
same signedness type.
This also gets rid of the static lifetime of the constant, as there's no
need to make a trivial type like this potentially live for the entire
duration of the program.
This doesn't really provide any benefit to the resource limit interface.
There's no way for callers to any of the service functions for resource
limits to provide a custom name, so all created instances of resource
limits other than the system resource limit would have a name of
"Unknown".
The system resource limit itself is already trivially identifiable from
its limit values, so there's no real need to take up space in the object to
identify one object meaningfully out of N total objects.
Since C++17, the introduction of deduction guides for locking facilities
means that we no longer need to hardcode the mutex type into the locks
themselves, making it easier to switch mutex types, should it ever be
necessary in the future.
Since C++17, we no longer need to explicitly specify the type of the
mutex within the lock_guard. The type system can now deduce these with
deduction guides.
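For example:

#include <mutex>

std::mutex mutex;

void Before() {
    // Pre-C++17: the mutex type had to be spelled out explicitly.
    std::lock_guard<std::mutex> lock{mutex};
}

void After() {
    // C++17 class template argument deduction infers std::mutex for us.
    std::lock_guard lock{mutex};
}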
Based off RE, most of these structure members are register values, which
makes sense, given this service is used to convey fatal errors.
One member indicates the program entry point address, one is a set of
bit flags used to determine which registers to print, and one member
indicates the architecture type.
The only member that still isn't determined is the final member within
the data structure.
The kernel makes sure that the given size to unmap is always the same
size as the entire region managed by the shared memory instance,
otherwise it returns an error code signifying an invalid size.
This is similarly done for transfer memory (which we already check for).
This was initially added to prevent problems from stubbed/unimplemented NFC services, but since we never encountered any, and since it's only used in a deprecated function anyway, we can just remove it to avoid further cluttering the settings.
Reports the (mostly) correct size through svcGetInfo now for queries to
total used physical memory. This still doesn't correctly handle memory
allocated via svcMapPhysicalMemory, however, we don't currently handle
that case anyways.
This will make operating with the process-related SVC commands much
nicer in the future (the parameter representing the stack size in
svcStartProcess is a 64-bit value).
These functions act in tandem similar to how a lock or mutex require a
balanced lock()/unlock() sequence.
EnterFatalSection simply increments a counter for how many times it has
been called, while LeaveFatalSection ensures that a previous call to
EnterFatalSection has occurred. If a previous call has occurred (the
counter is not zero), then the counter gets decremented as one would
expect. If a previous call has not occurred (the counter is zero), then
an error code is returned.
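In rough terms, the pair behaves like this (a sketch; the real service
returns a proper error code rather than a bool):

#include <cstdint>

struct FatalSectionState {
    std::uint32_t count = 0;
};

// Balanced like lock()/unlock(): Enter bumps the counter...
void EnterFatalSection(FatalSectionState& state) {
    ++state.count;
}

// ...and Leave only succeeds if a matching Enter happened beforehand.
bool LeaveFatalSection(FatalSectionState& state) {
    if (state.count == 0) {
        return false;  // maps to an error code in the actual service
    }
    --state.count;
    return true;
}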
In some cases, our callbacks were using s64 as a parameter, and in other
cases, they were using an int, which is inconsistent.
To make all callbacks consistent, we can just use an s64 as the type for
late cycles, given it gets rid of the need to cast internally.
While we're at it, also resolve some signed/unsigned conversions that
were occurring related to the callback registration.
One behavior that we weren't handling properly in our heap allocation
process was the ability for the heap to be shrunk down in size if a
larger size was previously requested.
This adds the basic behavior to do so and also gets rid of HeapFree, as
it's no longer necessary now that we have allocations and deallocations
going through the same API function.
While we're at it, fully document the behavior that this function
performs.
Makes it more obvious that this function is intending to stand in for
the actual supervisor call itself, and not acting as a general heap
allocation function.
Also the following change will merge the freeing behavior of HeapFree
into this function, so leaving it as HeapAllocate would be misleading.
In cases where HeapAllocate is called with the same size as the current
heap, we can simply do nothing and return successfully.
This avoids doing work where we otherwise don't have to. This is also
what the kernel itself does in this scenario.
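A sketch of the early-out (member names are illustrative, not the actual
VMManager code):

#include <cstdint>

struct HeapState {
    std::uint64_t heap_base = 0;
    std::uint64_t heap_size = 0;
};

// Returns the heap base address; error handling is elided for brevity.
std::uint64_t SetHeapSize(HeapState& heap, std::uint64_t size) {
    if (size == heap.heap_size) {
        // Same size as the current heap: nothing to map or unmap.
        return heap.heap_base;
    }
    // ... otherwise grow or shrink the heap mapping as needed ...
    heap.heap_size = size;
    return heap.heap_base;
}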
Another holdover from Citra that can be tossed out is the notion of the
heap needing to be allocated at different addresses. On the Switch, the
base address of the heap will always be managed by the memory allocator
in the kernel, so this doesn't need to be specified in the function's
interface itself.
The heap on the Switch is always allocated with read/write permissions,
so we don't need to add specifying the memory permissions as part of the
heap allocation itself either.
This also corrects the error code returned from within the function.
If the size of the heap is larger than the entire heap region, then the
kernel will report an out of memory condition.
The use of a shared_ptr is an implementation detail of the VMManager
itself when mapping memory. Because of that, we shouldn't require all
users of the CodeSet to have to allocate the shared_ptr ahead of time.
It's intended that CodeSet simply pass in the required direct data, and
that the memory manager takes care of it from that point on.
This means we just do the shared pointer allocation in a single place,
when loading modules, as opposed to in each loader.
This source file was utilizing its own version of the NSO header.
Instead of keeping this around, we can have the patch manager also use
the version of the header that we have defined in loader/nso.h
The total struct itself is 0x100 (256) bytes in size, so we should be
providing that amount of data.
Without the data, this can result in omitted data from the final loaded
NSO file.
Makes it more evident that one is for actual code and one is for actual
data. Mutable and static are less than ideal terms here, because
read-only data is technically not mutable, but we were mapping it with
that label.
In 93da8e0abf, the page table construct
was moved to the common library (which utilized these inclusions). Since
the move, nothing requires these headers to be included within the
memory header.
- GPU will be released on shutdown, before pages are unmapped.
- On subsequent runs, current_page_table will not be nullptr, but the GPU might not be valid yet.
Given this is utilized by the loaders, this allows avoiding inclusion of
the kernel process definitions where avoidable.
This also keeps the loading format for all executable data separate from
the kernel objects.
Neither the NRO nor NSO loaders actually make use of the functions or
members provided by the Linker interface, so we can just remove the
inheritance altogether.
This function passes in the desired main applet and library applet
volume levels. We can then just pass those values back within the
relevant volume getter functions, allowing us to unstub those as well.
The initial values for the library and main applet volumes differ. The
main applet volume is 0.25 by default, while the library applet volume
is initialized to 1.0 by default in the services themselves.
Rather than make a global accessor for this sort of thing, we can make
it part of the thread interface itself. This allows getting rid of a
hidden global accessor in the kernel code.
This condition was checking against the nominal thread priority, whereas
the kernel itself checks against the current priority instead. We were
also assigning the nominal priority, when we should be assigning
current_priority, which takes priority inheritance into account.
This can lead to the incorrect priority being assigned to a thread.
Given we recursively update the relevant threads, we don't need to go
through the whole mutex waiter list. This matches what the kernel does
as well (only accessing the first entry within the waiting list).
* gdbstub: fix IsMemoryBreak() returning false while connected to client
As a result, the only existing codepath for a memory watchpoint hit to break into GDB (InterpreterMainLoop, GDB_BP_CHECK, ARMul_State::RecordBreak) is finally taken,
which exposes incorrect logic* in both RecordBreak and ServeBreak.
* a blank BreakpointAddress structure is passed, which sets r15 (PC) to NULL
* gdbstub: DynCom: default-initialize two members/vars used in conditionals
* gdbstub: DynCom: don't record memory watchpoint hits via RecordBreak()
For now, instead check for GDBStub::IsMemoryBreak() in InterpreterMainLoop and ServeBreak.
Fixes PC being set to a stale/unhit breakpoint address (often zero) when a memory watchpoint (rwatch, watch, awatch) is handled in ServeBreak() and generates a GDB trap.
Reasons for removing a call to RecordBreak() for memory watchpoints:
* The ``breakpoint_data`` we pass is typed Execute or None. It describes the predicted next code breakpoint hit relative to PC;
* GDBStub::IsMemoryBreak() returns true if a recent Read/Write operation hit a watchpoint. It doesn't specify which in return, nor does it trace it anywhere. Thus, the only data we could give RecordBreak() is a placeholder BreakpointAddress at offset NULL and type Access. I found the idea silly, compared to simply relying on GDBStub::IsMemoryBreak().
There is currently no measure in the code that remembers the addresses (and types) of any watchpoints that were hit by an instruction, in order to send them to GDB as "extended stop information."
I'm considering an implementation for this.
* gdbstub: Change an ASSERT to DEBUG_ASSERT
I have never seen the (Reg[15] == last_bkpt.address) assert fail in practice, even after several weeks of (locally) developing various branches around GDB. Only leave it inside Debug builds.
Makes it an instantiable class like it is in the actual kernel. This
will also allow removing reliance on global accessors in a following
change, now that we can encapsulate a reference to the system instance
in the class.
Within the kernel, shared memory and transfer memory facilities exist as
completely different kernel objects. They also have different validity
checking as well. Therefore, we shouldn't be treating the two as the
same kind of memory.
They also differ in terms of their behavioral aspect as well. Shared
memory is intended for sharing memory between processes, while transfer
memory is intended to be for transferring memory to other processes.
This breaks out the handling for transfer memory into its own class and
treats it as its own kernel object. This is also important when we
consider resource limits as well. Particularly because transfer memory
is limited by the resource limit value set for it.
While we currently don't handle resource limit testing against objects
yet (but we do allow setting them), this will make implementing that
behavior much easier in the future, as we don't need to distinguish
between shared memory and transfer memory allocations in the same place.
With this, all kernel objects finally have all of their data members
behind an interface, making it nicer to reason about interactions with
other code (as external code no longer has the freedom to totally alter
internals and potentially messing up invariants).
After doing a little more reading up on the Opus codec, it turns out
that the multistream API that is part of libopus can handle regular
packets. Regular packets are just a degenerate case of multistream Opus
packets, and all that's necessary is to pass the number of streams as 1
and provide a basic channel mapping, then everything works fine for
that case.
This allows us to get rid of the need to use both APIs in the future
when implementing multistream variants in a follow-up PR, greatly
simplifying the code that needs to be written.
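For instance, a regular stereo packet can be decoded through the
multistream API by treating it as a single coupled stream with an
identity channel mapping (a sketch against libopus, error handling
elided):

#include <opus_multistream.h>

OpusMSDecoder* CreateRegularStereoDecoder() {
    // One coupled stream carrying both channels, mapped 1:1.
    const unsigned char mapping[2] = {0, 1};
    int error = 0;
    return opus_multistream_decoder_create(48000, /*channels=*/2,
                                           /*streams=*/1,
                                           /*coupled_streams=*/1, mapping,
                                           &error);
}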
Previously this was required, as BitField wasn't trivially copyable.
BitField has since been made trivially copyable, so now this isn't
required anymore.
Relocates the error code to where it's most related, similar to how all
the other error codes are. Previously we were including a non-generic
error in the main result code header.
These can just be passed regularly, now that we use fmt instead of our
old logging system.
While we're at it, make the parameters to MakeFunctionString
std::string_views.
There's no real need to use a shared lifetime here, since we don't
actually expose them to anything else. This is also kind of an
unnecessary use of the heap given the objects themselves are so small;
small enough, in fact that changing over to optionals actually reduces
the overall size of the HLERequestContext struct (818 bytes to 808
bytes).
Now that we have the address arbiter extracted to its own class, we can
fix an inaccuracy with the kernel. Said inaccuracy being that there
isn't only one address arbiter. Each process instance contains its own
AddressArbiter instance in the actual kernel.
This fixes that and gets rid of another long-standing issue that could
arise when attempting to create more than one process.
Similar to how WaitForAddress was isolated to its own function, we can
also move the necessary conditional checking into the address arbiter
class itself, allowing us to hide the implementation details of it from
public use.
Rather than let the service call itself work out which function is the
proper one to call, we can make that a behavior of the arbiter itself,
so we don't need to directly expose those implementation details.
This will be utilized by more than just that class in the future. This
also renames it from OpusHeader to OpusPacketHeader to be more specific
about what kind of header it is.
Places all error codes in an easily includable header.
This also corrects the unsupported error code (I accidentally used the
hex value when I meant to use the decimal one).
Places all of the functions for address arbiter operation into a class.
This will be necessary for future deglobalizing efforts related to both
the memory and system itself.
Removes a few inclusion dependencies from the headers or replaces
existing ones with ones that don't indirectly include the required
headers.
This allows removing an inclusion of core/memory.h, meaning that if the
memory header is ever changed in the future, it won't result in
rebuilding the entirety of the HLE services (as the IPC headers are used
quite ubiquitously throughout the HLE service implementations).
Avoids directly relying on the global system instance and instead makes
an arbitrary system instance an explicit dependency on construction.
This also allows removing dependencies on some global accessor functions
as well.
Given we already pass in a reference to the kernel that the shared
memory instance is created under, we can just use that to check the
current process, rather than using the global accessor functions.
This allows removing direct dependency on the system instance entirely.
The comment already invalidates itself: neither MMIO nor the rasterizer cache belongs to HLE kernel state. This mutex has too large a scope if MMIO or the cache is included, which is prone to deadlock when multiple threads acquire these resources at the same time. If necessary, each MMIO component or the rasterizer should have its own lock.
This currently has the same behavior as the regular
OpenAudioRenderer API function, so we can just move the code within
OpenAudioRenderer to an internal function that both service functions
call.
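A simplified sketch of the resulting structure (types and the second
function's name are illustrative, not the exact service code):

struct RequestContext {};

namespace {
void OpenAudioRendererImpl(RequestContext& ctx) {
    // ... shared setup and response that used to live in OpenAudioRenderer ...
}
} // namespace

void OpenAudioRenderer(RequestContext& ctx) {
    OpenAudioRendererImpl(ctx);
}

// The second service function currently behaves identically, so it forwards
// to the same helper.
void OpenAudioRendererAuto(RequestContext& ctx) {
    OpenAudioRendererImpl(ctx);
}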
This service function appears to do nothing noteworthy on the switch.
All it does at the moment is either return an error code or abort the
system. Given we obviously don't want to kill the system, we just opt
for always returning the error code.
Provides names for previously unknown entries (aside from the two u8
that appear to be padding bytes, and a single word that also appears
to be reserved or padding).
This will be useful in subsequent changes when unstubbing behavior related
to the audio renderer services.
This function is also supposed to check its given policy type with the
permission of the service itself. This implements the necessary
machinery to unstub these functions.
Policy::User seems to just be basic access (which is probably why vi:u
is restricted to that policy), while the other policy seems to be for
extended abilities regarding which displays can be managed and queried,
so this is assumed to be for a background compositor (which I've named,
appropriately, Policy::Compositor).
There's no real reason this shouldn't be allowed, given some values sent
via a request can be signed. This also makes it less annoying to work
with popping enum values, given an enum class with no type specifier
will work out of the box now.
It's also kind of an oversight to allow popping s64 values, but nothing
else.
This didn't really provide much benefit here, especially since the
subsequent change requires that the behavior for each service's
GetDisplayService differs in a minor detail.
This also arguably makes the services nicer to read, since it gets rid
of an indirection in the class hierarchy.
The kernel allows restricting the total size of the handle table through
the process capability descriptors. Until now, this functionality wasn't
hooked up. With this, the process handle tables become properly restricted.
In the case of metadata-less executables, the handle table will assume
the maximum size is requested, preserving the behavior that existed
before these changes.
The NVFlinger service is already passed into services that need to
guarantee its lifetime, so the BufferQueue instances will already live
as long as they're needed. Making them std::shared_ptr instances in this
case is unnecessary.
Like the previous changes made to the Display struct, this prepares the
Layer struct for changes to its interface. Given Layer will be given
more invariants in the future, we convert it into a class to better
signify that.
With the display and layer structures relocated to the vi service, we
can begin giving these a proper interface before beginning to properly
support the display types.
This converts the display struct into a class and provides it with the
necessary functions to preserve behavior within the NVFlinger class.
* Fixes Unicode Key File Directories
Adds code so that when loading a file it converts to UTF16 first, to
ensure the files can be opened. Code borrowed from FileUtil::Exists.
* Update src/core/crypto/key_manager.cpp
Co-Authored-By: Jungorend <Jungorend@users.noreply.github.com>
* Update src/core/crypto/key_manager.cpp
Co-Authored-By: Jungorend <Jungorend@users.noreply.github.com>
* Using FileUtil instead to be cleaner.
* Update src/core/crypto/key_manager.cpp
Co-Authored-By: Jungorend <Jungorend@users.noreply.github.com>
These are more closely related to the vi service as opposed to the
intermediary nvflinger.
This also places them in their relevant subfolder, as future changes to
these will likely result in subclassing to represent various displays
and services, as they're done within the service itself on hardware.
The reasoning for prefixing the display and layer source files is to
avoid potential clashing if two files with the same name are compiled
(e.g. if 'display.cpp/.h' or 'layer.cpp/.h' is added to another service
at any point), which MSVC will actually warn against. This prevents that
case from occurring.
This also presently converts the std::array introduced within
f45c25aaba back to a std::vector to allow
the forward declaration of the Display type. Forward declaring a type
within a std::vector is allowed since the introduction of N4510
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4510.html) by
Zhihao Yuan.
A fairly trivial change. Other sections of the codebase use nested
namespaces instead of separate namespaces here. This one must have just
been overlooked.
Gets rid of the largest set of mutable global state within the core.
This also paves a way for eliminating usages of GetInstance() on the
System class as a follow-up.
Note that no behavioral changes have been made, and this simply extracts
the functionality into a class. This also has the benefit of making
dependencies on the core timing functionality explicit within the
relevant interfaces.
The necessity of this parameter is dubious at best, and in 2019 probably
offers completely negligible savings as opposed to just leaving this
enabled. This removes it and simplifies the overall interface.
Places all of the timing-related functionality under the existing Core
namespace to keep things consistent, rather than having the timing
utilities sit in their own completely separate namespace.
This commit is automatically generated by a command in zsh:
sed -i -- 's/BitField<\(.*\)_le>/BitField<\1>/g' **/*(D.)
BitField is now aware of endianness and defaults to little endian. It expects a value representation type without a storage specification for its template parameter.
Converts many of the Find* functions to return a std::optional<T> as
opposed to returning the raw return values directly. This allows
removing a few assertions and handles error cases like the service
itself does.
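For instance (the function name and types here are illustrative):

#include <cstdint>
#include <optional>
#include <string_view>

// Returning std::optional makes the "not found" case explicit instead of
// asserting or using sentinel values.
std::optional<std::uint64_t> FindLayerId(std::string_view display_name) {
    if (display_name == "Default") {
        return 1;  // hypothetical layer id
    }
    return std::nullopt;  // the caller now handles this like the real service
}

void Example() {
    if (const auto layer_id = FindLayerId("Default")) {
        // ... use *layer_id ...
    } else {
        // ... return the appropriate error code ...
    }
}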
This is a holdover from Citra; the Horizon kernel on the Switch has no
prominent kernel object that functions as a timer. At least not
to the degree of sophistication that this class provided.
As such, this can be removed entirely. This class also wasn't used at
all in any meaningful way within the core, so this was just code sitting
around doing nothing. This also allows removing a few things from the
main KernelCore class that allows it to use slightly less resources
overall (though very minor and not anything really noticeable).
No inheritors of the WaitObject class actually make use of their own
implementations of these functions, so they can be made non-virtual.
It's also kind of sketchy to allow overriding how the threads get added
to the list anyways, given the kernel itself on the actual hardware
doesn't seem to customize based off this.
This functions almost identically to DecodeInterleavedWithPerfOld,
however this function also has the ability to reset the decoder context.
This is documented as a potentially desirable thing in the libopus
manual in some circumstances as it says for the OPUS_RESET_STATE ctl:
"This should be called when switching streams in order to prevent the
back to back decoding from giving different result from one at a time
decoding."
In addition to the default, external, EDID, and internal displays,
there's also a null display provided as well, which as the name
suggests, does nothing but discard all commands given to it. This is
provided for completeness.
Opening a display isn't really a thing to warn about. It's an expected
thing, so this can be a debug log. This also alters the string to
indicate the display name better.
Opening "Default" display reads a little nicer compared to Opening
display Default.
This quite literally functions as a basic setter. No other error
checking or anything (since there's nothing to really check against).
With this, it completes the pm:bm interface in terms of functionality.
This appears to be a vestigial API function that's only kept around for
compatibility's sake, given the function only returns a success error
code and exits.
Since that's the case, we can remove the stubbed notification from the
log, since doing nothing is technically the correct behavior in this
case.
Looking into the implementation of the C++ standard facilities that seem
to be within all modules, it appears that they use 7 as a break reason
to indicate an uncaught C++ exception.
This was primarily found via the third last function called within
Horizon's equivalent of libcxxabi's demangling_terminate_handler(),
which passes the value 0x80000007 to svcBreak.
This is a function that definitely doesn't always have a non-modifying
behavior across all implementations, so this should be made non-const.
This gets rid of the need to mark data members as mutable to work around
the fact mutating data members needs to occur.
These values are not equivalent, based off RE. The internal value is put
into a lookup table with the following values:
[3, 0, 1, 2, 4]
So the values absolutely do not map 1:1 like the comment was indicating.
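Conceptually, the conversion indexes into that table rather than passing
the value through unchanged (a hypothetical sketch, not the service's
actual code):

#include <array>
#include <cstdint>

std::uint32_t ConvertScalingMode(std::uint32_t internal_mode) {
    // Internal value -> converted value, per the lookup table [3, 0, 1, 2, 4].
    constexpr std::array<std::uint32_t, 5> table{3, 0, 1, 2, 4};
    return table.at(internal_mode);
}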
Avoids entangling the IPC buffer appending with the actual operation of
converting the scaling values over. This also inserts the proper error
handling for invalid scaling values.
This appears to only check if the scaling mode can actually be
handled, rather than actually setting the scaling mode for the layer.
This implements the same error handling performed on the passed in
values.
Within the actual service, no distinction is made between docked and
undocked modes. This will always return constant values reporting
1280x720 as the dimensions.
This IPC command is simply a stub inside the actual service itself, and
just returns a successful error code regardless of input. This is likely
only retained in the service interface to not break older code that relied
upon it succeeding in some way.
In many cases, we didn't bother to log out any of the popped data
members. This logs them out to the console within the logging call to
provide more contextual information.
Internally within the vi services, this is essentially all that
OpenDefaultDisplay does, so it's trivial to just do the same, and
forward the default display string into the function.