Download the latest version of PowerShell Universal.
Examples and full solutions for PowerShell Universal.
The Ironman Software blog has articles about PowerShell Universal.
Connect with the PowerShell Universal community.
Purchase a license for the features of PowerShell Universal.
File a bug report or feature request for PowerShell Universal.
Get started with PowerShell Universal
You'll need to install the PowerShell Universal server. There are several ways to do so, but you can use the command line below to get started quickly:
You can install PowerShell Universal as a service. Ensure that PowerShell is running as administrator, or the service won't install correctly.
You can install PowerShell Universal using the following shell script:
You can install PowerShell Universal using the Universal PowerShell module:
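A minimal sketch of a module-based install using the Universal module's `Install-PSUServer` cmdlet (run from an elevated PowerShell session on Windows if you intend to install as a service):

```powershell
# Install the Universal module from the PowerShell Gallery
Install-Module Universal -Scope CurrentUser
# Install the latest version of the PowerShell Universal server
Install-PSUServer
```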
By default, PowerShell Universal runs on port 5000 of localhost.
The first run wizard will step you through the basic settings of PowerShell Universal. This includes the default admin username and password, security settings, telemetry settings and license.
The admin account is used to log in to PowerShell Universal. A warning is displayed if the password does not meet the complexity requirements. You can always change it later.
Select from the drop-down of security settings. Each option adjusts certain features of PowerShell Universal to a different level of security. If you plan on cloning from a git repository, skip this step or set it to default.
PowerShell Universal can send anonymous telemetry data if you opt-in to do so. If you plan to clone from a git repository, skip this setting.
Add your license file. This is optional and needs to be an account-based license key.
APIs allow you to call PowerShell scripts over HTTP. To create an API, click API \ Endpoints and click Create New Endpoint. Specify a URL.
Next, click details on your new API and enter the following command into the editor:
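For example, a minimal endpoint script might return process information (the property selection here is illustrative; PowerShell Universal serializes the output to JSON automatically):

```powershell
# Return the top 10 processes by CPU usage
Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 Name, CPU, Id
```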
Save the script and then click the Execute button to test it out.
You can also execute the API via Invoke-RestMethod.
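A sketch of calling the endpoint from PowerShell, assuming the default port and a hypothetical endpoint URL of `/process`:

```powershell
# Call the endpoint; adjust the URL to match the endpoint you created
Invoke-RestMethod -Uri 'http://localhost:5000/process'
```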
To create a script, click Automation \ Scripts and then click Create New Script.
Enter the following script into the editor and save:
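A simple example script you could use here (the content is illustrative; any PowerShell is valid, and output is captured in the job's log):

```powershell
# List a few running services; the output appears in the job output pane
Get-Service | Where-Object Status -eq 'Running' | Select-Object -First 5 Name, Status
```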
Once the script is saved, click Run.
To create a new PowerShell-based user interface (app), you can click User Interfaces \ Apps and then Create New App.
After clicking Ok, click the Details button to edit the PowerShell script. Add the following script to the editor:
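A minimal app sketch that matches the steps described, using the app cmdlets from PowerShell Universal:

```powershell
New-UDApp -Content {
    # A button that shows a toast message when clicked
    New-UDButton -Text 'Click Me' -OnClick {
        Show-UDToast -Text 'Hello, world!'
    }
}
```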
Save the app, click the Restart button and then click the View button. Click the Click Me button.
Learn more about the various features of PowerShell Universal:
This page provides installation and configuration information for Docker.
NOTE: Apple M1 devices: At the time of writing, there are some issues on Apple M1 devices and some ARM64/ARMv8 devices. Please review this forum thread before proceeding.
Run the following command to confirm Docker is installed:
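The version check can be performed like this:

```shell
docker --version
```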
Example Output:
Docker Compose v1 uses the command docker-compose. As of June 2023, support for Docker Compose v1 ends.
Docker Compose v2 uses the command docker compose.
If you are using Docker Compose v1 please adjust the commands accordingly. More information on Docker Compose can be found here.
Run one of the following commands to confirm that Docker Compose is installed:
Docker Compose v1:
Docker Compose v2:
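The corresponding version checks for each:

```shell
# Docker Compose v1
docker-compose --version
# Docker Compose v2
docker compose version
```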
Example Output:
To ensure that Docker has the ability to pull and run container images run the following command:
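The standard smoke test for this is the hello-world image:

```shell
docker run hello-world
```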
Example Output:
In order to run PowerShell Universal, use the provided container image. The docker image is available on Docker Hub.
The prebuilt version supports both free & paid features of PowerShell Universal.
Start the container by pulling the image and then running a container with the default port bound.
If port 5000 is unavailable on your host, switch to another port.
For example, to present the application on port 80:
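A sketch of both variants, assuming the ironmansoftware/universal image on Docker Hub:

```shell
# Pull the image and run with the default port bound
docker pull ironmansoftware/universal
docker run -d --name PSU -p 5000:5000 ironmansoftware/universal

# If port 5000 is unavailable, map another host port (for example, 80)
docker run -d --name PSU -p 80:5000 ironmansoftware/universal
```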
The docker run command allows you to mount a volume for persistent storage. Mount the volume to the /root folder.
Mount a volume on a container on Windows
The following command mounts the folder C:\docker\volumes\PSU to /root on your container:
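A sketch of the command, assuming the ironmansoftware/universal image:

```shell
docker run -d --name PSU -p 5000:5000 -v C:\docker\volumes\PSU:/root ironmansoftware/universal
```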
Mount a volume on a container on Mac and Linux
The following command mounts the folder /docker/volumes/PSU to /root on your container:
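The equivalent sketch for Mac and Linux hosts:

```shell
docker run -d --name PSU -p 5000:5000 -v /docker/volumes/PSU:/root ironmansoftware/universal
```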
The following command removes a stopped container named PSU:
The following command stops a container named PSU:
The --force flag can remove a running container:
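The container lifecycle commands described above, in sequence:

```shell
# Stop the running container
docker stop PSU
# Remove the stopped container
docker rm PSU
# Or remove a running container in one step
docker rm --force PSU
```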
Docker Compose allows you to use a YAML text file to standardize your build and script the deployment (or build) of multiple containers.
The default name for any compose file is docker-compose.yml. It is recommended you use this as your compose filename.
The following compose file runs a PowerShell Universal container on Windows:
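A sketch of such a compose file; the image name and volume path mirror the examples above and may need adjusting for your environment:

```yaml
version: "3"
services:
  psu:
    image: ironmansoftware/universal
    container_name: PSU
    ports:
      - "5000:5000"
    volumes:
      - C:\docker\volumes\PSU:/root
```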
The following compose file runs a Powershell Universal container on Mac and Linux:
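The Mac and Linux variant differs only in the host path:

```yaml
version: "3"
services:
  psu:
    image: ironmansoftware/universal
    container_name: PSU
    ports:
      - "5000:5000"
    volumes:
      - /docker/volumes/PSU:/root
```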
Using a terminal shell (or PowerShell on Windows), use the cd command to change to the directory containing your docker-compose.yml file.
Run the following command:
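With Compose v2 this is (for Compose v1, substitute `docker-compose up -d`):

```shell
docker compose up -d
```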
Example Output:
Using a terminal shell (or PowerShell on Windows), cd to the directory containing your docker-compose.yml file.
Run the following command
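To stop and remove the containers defined in the compose file:

```shell
docker compose down
```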
Example Output:
You can add Environment variables into your Compose Scripts. Below is an example of:
Setting a node name
Adding SQL persistence
Adding a SQL Connection String
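A sketch of such a compose file. The environment variable names follow the double-underscore convention PowerShell Universal uses for settings, but the exact names and connection string values are illustrative and should be verified against your version's documentation:

```yaml
version: "3"
services:
  psu:
    image: ironmansoftware/universal
    ports:
      - "5000:5000"
    environment:
      # Name this node in a multi-node configuration
      - NodeName=PSUNode1
      # Enable the SQL persistence plugin
      - Plugins__0=SQL
      # SQL Server connection string (replace server and credentials)
      - Data__ConnectionString=Server=sqlserver;Database=PSU;User Id=sa;Password=Password1;
```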
You can add Environment variables into your Compose Scripts. Below is an example of:
Setting a node name
Adding PostgreSQL persistence
Adding a PostgreSQL Connection String
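The PostgreSQL variant follows the same pattern; again, the variable names and connection string are illustrative:

```yaml
version: "3"
services:
  psu:
    image: ironmansoftware/universal
    ports:
      - "5000:5000"
    environment:
      - NodeName=PSUNode1
      # Enable the PostgreSQL persistence plugin
      - Plugins__0=PostgreSQL
      # PostgreSQL connection string (replace host and credentials)
      - Data__ConnectionString=Host=postgres;Port=5432;Database=psu;Username=psu;Password=Password1;
```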
If you wish to build more features, modify, or hardcode environment variables into your container, then create a Dockerfile.
NOTE: Dockerfile names are case-sensitive and must start with a capital 'D'.
To create a Docker image that can persist the Universal data, create a dockerfile like the one below.
This Dockerfile exposes port 5000, creates a /data volume, sets configuration environment variables to store the Universal repository and database in the volume and then sets the Universal.Server as the entry point to the container.
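A sketch of such a Dockerfile. The base image tag, connection string format, and entry point path are assumptions and should be checked against the published image:

```dockerfile
# Base image published by Ironman Software (tag is illustrative)
FROM ironmansoftware/universal:latest
# Expose the default web server port
EXPOSE 5000
# Create a volume for persistent data
VOLUME ["/data"]
# Store the repository and the database inside the volume
ENV Data__RepositoryPath /data/Repository
ENV Data__ConnectionString "Data Source=/data/database.db"
# Start the PowerShell Universal server
ENTRYPOINT ["./Universal/Universal.Server"]
```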
From the path that hosts your Dockerfile, run a build with the build command:
Start the docker container with the run command and make sure to specify the volume to mount:
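A sketch of the build and run steps, using an illustrative image tag:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t psu-custom .
# Run it, mounting a host folder over the /data volume
docker run -d --name PSU -p 5000:5000 -v /docker/volumes/PSU:/data psu-custom
```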
To use SQL persistence, define the plugin and connection string as follows:
To use PostgreSQL persistence, define the plugin and connection string as follows:
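In a Dockerfile, the equivalent settings can be hardcoded as environment variables (names and values are illustrative, as above):

```dockerfile
# SQL Server persistence
ENV Plugins__0 SQL
ENV Data__ConnectionString "Server=sqlserver;Database=PSU;User Id=sa;Password=Password1;"

# Or, for PostgreSQL persistence:
# ENV Plugins__0 PostgreSQL
# ENV Data__ConnectionString "Host=postgres;Port=5432;Database=psu;Username=psu;Password=Password1;"
```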
To properly support time zones on Linux when scheduling jobs, include the tzdata package in your Dockerfile along with an environment variable that specifies the server time zone.
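A sketch, assuming a Debian/Ubuntu-based image and an illustrative time zone value:

```dockerfile
FROM ironmansoftware/universal:latest
# Install tzdata so time zone names resolve correctly for schedules
RUN apt-get update && apt-get install -y tzdata && rm -rf /var/lib/apt/lists/*
# Set the server time zone
ENV TZ=America/New_York
```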
We publish the following tags to Docker Hub:
latest - Current version using Ubuntu LTS
5.x-preview-modules - Nightly build of version 5 using Ubuntu LTS and select AZ modules
5.x-preview-<OS>-<PS> - Nightly build of version 5 with the specified OS and PS version
4.x-preview-<OS>-<PS> - Nightly build of version 4 with the specified OS and PS version
5.x-<OS>-<PS> - Production version 5 with the specified OS and PS version
5.x-modules - Current production version on Ubuntu LTS with select AZ modules installed
4.x-<OS>-<PS> - Current production version 4 with the specified OS and PS versions
The module container images include the following modules:
Az.Accounts
Az.Compute
Az.KeyVault
Az.Resources
Invoke-SqlCmd2
This basic "How to Get Started" enables you to start running or building PSU containers. The references below link to the sources for the commands used:
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/commandline/stop/
https://docs.docker.com/engine/reference/commandline/rm/
https://docs.docker.com/compose/
https://docs.docker.com/engine/reference/commandline/build/
New features in PowerShell Universal v5.
The admin console has been rebuilt using Blazor for ASP.NET. The look and feel are the same but more tightly associated with the backend Universal platform.
The PowerShell Universal Portal provides a simple-to-use access point for consumers of PSU resources. You can assign resources by role, and they will be grouped by tags in a searchable interface without the complexities of the admin console.
Portal Pages and Widgets provide easy-to-use UI components that you can visually position on pages and assign to roles. Widgets are built using Blazor and PowerShell, accept parameters, and are interactive.
The PowerShell Universal Gallery is now integrated directly in PowerShell Universal. Access pre-built solutions for your PowerShell Universal environment.
You can view the Gallery repository here.
Authorization within the platform is now configured via a granular permission system that controls which users have access to which resources. This also includes new roles for specific feature groups, so administrators do not need to configure privileges for every scenario.
PostgreSQL is now supported as a persistence store. PostgreSQL is open source and free.
PowerShell Universal v5 is built on .NET 8 and PowerShell 7.4.
The Universal module now uses gRPC for all communication with the system. gRPC is an interprocess communication technology that is fast and runs over HTTP. By unifying on a single technology, the cmdlets now take advantage of all the granular privileges and help reduce technical debt in the platform.
The Agent environment has been replaced with Windows PowerShell 5.1 and PowerShell 7 environments. These environments host the PowerShell engine, but they allow for better control of assembly loading to ensure more modules are compatible with PowerShell Universal. While pwsh.exe is still supported, we suggest using the PowerShell 7 environment when possible. The Windows PowerShell 5.1 environment is now required for running Windows PowerShell scripts, and powershell.exe is no longer supported as an environment.
Uninstall PowerShell Universal
Depending on how you installed PowerShell Universal, you will need to uninstall the application files.
If you installed using a provided ZIP file, you can simply stop the PowerShell Universal process or service and delete the folder you extracted to.
If you installed with the Windows MSI, uninstall the application from Add\Remove Programs.
The Universal module installs the application files to the following locations by default.
%ProgramData%\PowerShellUniversal
%HOME%/.PowerShellUniversal
Configuration files are stored in the repository folder. Once you have removed the application files, you can delete the configuration files. They are stored in the following locations by default:
%ProgramData%\PowerShellUniversal
%ProgramData%\UniversalAutomation
%HOME%/.PowerShellUniversal/
%HOME%/.PowerShellUniversal/
%AppData%\PowerShellUniversal
Removing the database depends on the database type used.
SQLite databases are stored in a single file on the file system.
%ProgramData%\PowerShellUniversal\database.db
%HOME%/.PowerShellUniversal/database.db
PostgreSQL and SQL databases are stored on your SQL server and will require you to manually remove the database.
You need to remove the IIS App Pool and Website when removing PowerShell Universal. Note that App Pools can be shared amongst websites and caution should be taken when doing so.
Migrate or restore configuration of a PowerShell Universal system.
It is often desirable to migrate a PowerShell Universal server configuration from one machine to another, whether due to a change of infrastructure, restoring from backup, operating system upgrades, or general data center maintenance.
This document explains the steps necessary to migrate PowerShell Universal configuration to another machine.
We recommend stopping the PowerShell Universal service before performing the migration or restore.
Depending on the type of migration or restoration, you may not need to perform all of these actions.
The configuration data files are stored in %ProgramData%\UniversalAutomation\Repository by default. These consist of features such as APIs, scripts, and apps. The entire directory is necessary for the configuration of the target system to function.
You can either copy the folder manually or via PowerShell. Ensure that you include all subdirectories.
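A sketch of the PowerShell approach; the destination path is an illustrative UNC path to the new server:

```powershell
# Copy the entire repository, including all subdirectories
Copy-Item -Path "$env:ProgramData\UniversalAutomation\Repository" `
          -Destination '\\NewServer\c$\ProgramData\UniversalAutomation\Repository' `
          -Recurse -Force
```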
The database migration will depend on the type of database used.
You will need to copy the SQLite database file to the configured, or default, database location. On a default installation, this will be %ProgramData%\UniversalAutomation\database.db. The target machine account or service account will need read and write access to this database file.
Because these databases are stored outside of the PowerShell Universal server, you do not need to perform a migration of the database itself. You will need to ensure that the target server has network access to the SQL host.
The PowerShell Universal appsettings.json file is necessary for providing the appropriate server settings to the platform. By default, this is stored in %ProgramData%\PowerShellUniversal\appsettings.json. You will need to copy this to the new server in the same location.
This file contains configuration settings such as HTTP certificate, authentication, git sync settings, API configuration options and more.
appsettings.json files do not change between upgrades, and you will likely not need to perform this action during a restore.
PowerShell Universal includes 3 built-in secret vaults that may need to be migrated. The database vault is included with the database migration and does not require extra steps. If you are performing a restore, it's unlikely you will need to perform these restore operations unless the secret vaults have become corrupted.
The SecretStore module uses a user-specific storage location to ensure that ACLs are enforced on the files themselves. You will need to ensure that you copy the vault's contents to the account of the user that will be running PowerShell Universal on the new system.
The BuiltInLocalVault is only available on Windows and uses Credential Manager to store secrets. You will need to recreate these secrets in the Credential Manager store on the new system.
Within Credential Manager, you will find PowerShell secrets stored with a ps: prefix.
While it's not possible to extract credentials directly from the Credential Manager UI, you can use the Secret Management modules directly. To retrieve secrets, you can do the following.
On the new server, you can do the reverse and call Set-Secret. Note that these commands need to run as the service account running PowerShell Universal in order to store them properly in the Credential Manager account for the user.
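A sketch of the round trip using the SecretManagement module; this assumes string secrets (PSCredential secrets need slightly different handling) and must run as the service account on each machine:

```powershell
# On the old server: export secrets from the BuiltInLocalVault
$secrets = Get-SecretInfo -Vault BuiltInLocalVault | ForEach-Object {
    [PSCustomObject]@{
        Name  = $_.Name
        Value = Get-Secret -Vault BuiltInLocalVault -Name $_.Name -AsPlainText
    }
}

# On the new server: recreate each secret
foreach ($secret in $secrets) {
    Set-Secret -Vault BuiltInLocalVault -Name $secret.Name -Value $secret.Value
}
```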
Certain authentication types will require configuration outside of PowerShell Universal. Unless you are moving the machine running PowerShell Universal or changing the accessible URLs, you will not need to perform these actions.
Ensure that the proper sign-on URLs are configured in your identity provider (e.g. Azure AD or Okta) if the host name of the server is changing. Without properly configured sign-on URLs, users will not be able to sign in to the new system.
There may be other resources that PowerShell Universal uses on the system that should be taken into account when migrating or restoring servers. Typically, you will not need to worry about these resources during a restore as they should remain the same if the machine has not changed.
PowerShell Modules
Environment Variables
Local Account Privileges
File System Permissions
Proxy Configuration
Certificates
Git SSH keys or credentials
DNS Settings
During the MSI install, leave all settings as default. We recommend leaving the service account blank and unchecking the box that states to start the PowerShell Universal service after the installation is complete.
After the installation completes, the service will be created but not running. Open Service Control Manager (services.msc) and set the service account for the PowerShell Universal service. Start the service.
When migrating a PowerShell Universal service, you may run into issues that arise from configuration differences between the two systems. The following are places to look for more information.
If the service starts and stops, there may be an issue with database access. We recommend looking in the Application log within Event Viewer. PowerShell Universal will report two application errors that will include .NET in the name. The second of the two errors will provide a human-readable exception with more details.
PowerShell Universal writes system logs to the %ProgramData%\PowerShellUniversal directory. Search for strings starting with [ERR] to gather more information about issues with the installation.
After migrating the service, check for any error notifications that may indicate misconfiguration of the system.
The ultimate command center for your PowerShell environment.
PowerShell Universal provides a centralized location to store and run scripts, build modules, expose REST APIs and share them with end users via automatic or custom user interfaces, setup schedules and more.
Transform your PowerShell scripts into RESTful HTTP and WebSocket APIs for seamless integration across platforms. Leverage OpenAPI to provide documentation and additional automation opportunities.
Streamline your automation with an intuitive web interface for executing, scheduling, securing, and auditing PowerShell scripts. Effortlessly manage tasks and access in-browser terminals to maximize efficiency and control in your automation workflows.
Create fully customizable, interactive web apps tailored for your internal users using PowerShell. With over 70 versatile controls and seamless integration to other PowerShell modules and scripts, you can build powerful interfaces to enhance productivity and collaboration.
PowerShell Universal is cross-platform and can be hosted on-premises, in the cloud or even on a Raspberry Pi.
PowerShell Universal is a versatile, cross-platform solution that adapts to your hosting needs. Deploy seamlessly on AWS, Azure, IIS, or your on-premises infrastructure—even on compact devices like a Raspberry Pi. Flexibility meets power for automation anywhere.
PowerShell Universal delivers a seamless development experience with built-in tools like IntelliSense, syntax highlighting, error checking, formatting, and debugging—all accessible directly from your browser. Enhance productivity further with a dedicated VS Code extension and integrated Git support for streamlined version control.
With support for PowerShell 7 and Windows PowerShell, seamless integration with PowerShell modules, and powerful tools for variable and secret management, it adapts to your needs. Enjoy multilingual support, compatibility with multiple database types (SQLite, SQL Server, PostgreSQL), and built-in high availability and load balancing for enterprise-grade scalability.
Join the thriving PowerShell Universal community and connect with like-minded professionals. Participate in our active forum for support and collaboration, explore our open-source gallery of pre-made solutions, and stay informed with our transparent roadmap and bug tracker.
Licensing options for PowerShell Universal
PowerShell Universal is licensed per server. We provide licenses for individuals and organizations.
A server is a single running instance of PowerShell Universal.
The license applies to each container instance and not the container host. For example, if you have 10 container instances running, you will need 10 licenses.
Each website running PowerShell Universal will need its own license; a single license does not cover the entire IIS server.
To install a license, click Settings \ License. Click the Add License button to upload your license file. You can also install licenses using the Set-PSULicense cmdlet. Offline licenses do not require an internet connection but will need to be reinstalled when the subscription expires, if you wish to update the version of PowerShell Universal. Online licenses require an internet connection and access to https://ironmansoftware.com in order to verify subscription status.
You can use the PSULICENSE environment variable to set a license. The value of this environment variable needs to be the contents of the license file.
Proxy configuration can be done by clicking Settings \ General and configuring the proxy URI and, optionally, credentials. You can also configure proxy settings with the Set-PSUSetting cmdlet.
When using account-based licensing, you will enter your account's license key. Whenever you activate a PowerShell Universal server, it will assign a license to the computer. This license key does not change, so there is no need to install a new license when renewing. You can view the assigned computers in your Ironman Software account.
The PowerShell Universal server needs to have access to ironmansoftware.com.
Offline license files are required for environments that do not have internet access. You will need to install a new license file when you plan to upgrade to a version past the expiration date of the license.
Online licenses work the same as offline but check the subscription status on ironmansoftware.com. The license is tied to a specific subscription and may require a change after renewal. We recommend account-based licensing over online licenses.
When a server license is purchased, you will be able to generate developer licenses for users building solutions for your team. They are intended for use by individual developers in their local environments. Do not use developer licenses when hosting a server for remote access for testing or production. Instances of PowerShell Universal running with a developer license will display a watermark in the admin console and any apps stating they are intended only for development purposes.
You can generate a developer license on the Settings \ License page by clicking the Generate Developer License button.
The following features of PowerShell Universal require a license.
Debugging Tools
Enterprise Authentication
OpenID Connect
SAML2
WS-Federation
Windows Authentication
Custom Authentication Scripts
Client Certificate
App Tokens
Enterprise Authorization
Permissions
Custom Authorization Scripts
Platform
Git Support
Module Management
Non-Database Credential Vaults
SQL Support
PostgreSQL Support
Published Folders
Cache Management
Computer Groups
Translations
Settings
Branding
Tags
APIs
Event Hubs
OpenAPI Documentation
Automation
Triggers
Terminals
Tests
Apps
App Page Editor
App Function Editor
Learn how to revert to a previous version of PowerShell Universal.
Downgrading can be complicated and error prone. We recommend restoring from a backup or snapshot instead of downgrading.
Major versions may include breaking changes. Minor versions may have additional cmdlets or parameters but will not have any breaking changes.
It is much easier to restore from a backup of the configuration files before the upgrade rather than manually updating files.
Downgrading the database schema can be a destructive operation. You may remove tables and columns that contain data. Always backup a database before performing these operations.
Below is an example of downgrading the schema of a SQLite database to version 5.3.0. You will need to stop the PowerShell Universal services before doing so.
You can downgrade a nightly build version of the database to a stable version of the database to allow for an upgrade to the stable build of the target version. You will lose any data found in new columns or tables.
Downgrading the application files is typically a simple process and depends on how you installed the product. You will need to perform the configuration file and database downgrades before performing the application downgrade.
To downgrade an MSI installation, you will need to first uninstall the current version. PowerShell Universal will not allow you to run a downgrade. After the uninstall is complete, perform an installation of the target version.
To downgrade a ZIP installation, simply delete the PowerShell Universal application files. Once the directory is clear, unzip the target version's ZIP into the installation directory. Ensure that you run Get-ChildItem -Recurse | Unblock-File after doing so.
Similar to the ZIP installation, remove the old version's files, then unzip and unblock the target version's files. Ensure that the website and app pool are stopped before attempting to do so.
If you are restoring from backup, you may need to downgrade the schema if you upgraded the version.
The PSUSecretStore vault uses Microsoft's SecretStore module. This module stores secrets on disk using symmetric encryption. A default encryption key is included with PowerShell Universal installations. If the key was updated, the new key will be in the appsettings.json file you migrated in the previous step. You will also need to move the physical secret store to the new server's file system.
Windows authentication requires the setup of an SPN for the service account running the PowerShell Universal service. Ensure this SPN is in place before attempting to use Windows authentication with the new system.
Once all the following steps have been taken, you can now install PowerShell Universal on the new server. If you are downgrading during a restore, please follow the documentation.
The MSI package for PowerShell Universal installs the platform as a Windows service that hosts its own web server called Kestrel. To install the service, download the MSI installer and run it. We recommend using the exact same version as the source server.
Universal is licensed per server. Visit our website for details on pricing.
Many features of PowerShell Universal are free.
You can purchase a license on our website.
In some scenarios, it may be required to roll back the version of PowerShell Universal. This could be due to a feature change or bug that affects the system in a way too impactful to continue with the version. We recommend validating a version in a development or quality assurance environment before upgrading in production to avoid having to perform a downgrade.
Downgrading the configuration files will require removing or altering the .universal repository files to remove or rename new parameters. New cmdlets will be ignored by PowerShell Universal. If a cmdlet was renamed, it may have to be updated as well. You will need to refer to the changelog to see which cmdlets have changed in each version.
You can find information about each configuration file in the configuration documentation.
Restoring the database to a previous version requires downgrading the schema. This can be accomplished with the tooling included with PowerShell Universal. Using the schema command, you will be able to select the down-level version.
Optional*: Windows PowerShell v5.1 or later
Optional*: PowerShell v7.2 or later
.NET Framework v4.8.0 or later (only for Windows PowerShell)
Optional*: PowerShell v7.2 or later
Validated Distributions: Ubuntu 18.04 and 20.04
Optional*: PowerShell v7.2 or later
Universal uses a variety of modern web frameworks and can have issues with older browsers such as Internet Explorer.
Recent versions of the following web browsers are supported: Chrome, Firefox, Safari, and Microsoft Edge.
When considering a browser, you will need to understand that certain modern browser features are required.
About PowerShell Universal REST APIs.
Universal provides the ability to define REST API endpoints using PowerShell. When the endpoints are executed by a compatible HTTP client, the PowerShell script will execute and return the result to the end user.
The REST API execution environment runs in your default PowerShell version. Unlike Automation jobs, which can also be run via the Universal management API, APIs that you define are run in a single PowerShell process. Because the PowerShell process is not started and stopped for each call to the endpoint, the API is much faster.
You can define the environment that runs the PowerShell Universal API process by specifying the -ApiEnvironment parameter on Set-PSUSetting. Changing this setting will cause the API process to restart.
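A sketch of setting the API environment; the environment name is illustrative and must match an environment defined on your server:

```powershell
# Run the API process in the PowerShell 7 environment
Set-PSUSetting -ApiEnvironment 'PowerShell 7'
```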
You can also define the environment used by specifying the Environment on the endpoint itself.
Performance is relative to the hardware and network conditions that you are running Universal on. That said, in ideal conditions you can expect the Universal APIs to service about 500 requests per second. This is with an entirely empty endpoint so any script that you add to that endpoint will reduce the throughput. The reduction of throughput will depend on the cmdlets and script executed within the API endpoint. There is no hard limit.
See https://blog.ironmansoftware.com/webapp-benchmark-siege/ for detailed information about benchmark tests on Universal APIs.
Variables are listed on the variables page.
Standardized documentation for your endpoints.
You can view the Management API documentation by visiting the built-in Swagger dashboard.
To create an OpenAPI definition, click APIs \ Documentation and then Create new Endpoint Documentation. You can set the name, URL, description and authentication details for the documentation.
Once created, you can assign endpoints to the documentation by editing the endpoint.
The documentation for your endpoint will appear within the Swagger dashboard. Select the definition with the Select a definition dropdown.
All your custom endpoints will be listed.
You can specify help text for your APIs using comment-based help. Including a synopsis, description, and parameter descriptions will result in each of those pieces being documented in the OpenAPI documentation and Swagger page.
For example, with a simple /get/:id endpoint, we could have comment-based help such as this.
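A sketch of the endpoint script with comment-based help; the endpoint body itself is illustrative:

```powershell
<#
.SYNOPSIS
Returns a user by ID.

.DESCRIPTION
Looks up a single user record and returns it as JSON.

.PARAMETER Id
The unique identifier of the user to return.
#>
param($Id)

# Illustrative endpoint body
@{ Id = $Id; Name = "User$Id" }
```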
The resulting Swagger page will show each of these descriptions.
Types can be defined within an endpoint documentation ScriptBlock. Click the Edit Details button on the API documentation record.
APIs can also be documented using input and output types by creating a PowerShell class and referencing it within your comment-based help. PowerShell Universal takes advantage of the .INPUTS and .OUTPUTS sections to specify accepted formats and define status code return values.
Within the .INPUTS and .OUTPUTS sections, you will define a YAML block to provide this information. To create types, use the Endpoint Documentation editor. This file is loaded when reading OpenAPI documents. This information is stored in endpointsDocumentation.ps1.
Input types are defined in the .INPUTS section. This section is a YAML block that defines whether the input is required, provides a description, and specifies the content type, followed by the PowerShell class you defined in the endpoint documentation.
Output types are similar to input but are specified per return code along with their content type and PowerShell class. The below example returns an ADAccountType class when an HTTP OK (200) is returned from the API. A 400 (Bad Request) does not return data but does provide a description that will be displayed in the API documentation.
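A sketch of what such help might look like, assuming an ADAccountType class defined in the endpoint documentation. The exact YAML keys are assumptions and should be verified against your version's OpenAPI documentation feature:

```powershell
<#
.SYNOPSIS
Creates an AD account.

.INPUTS
required: true
description: The account to create.
content:
    application/json:
        type: ADAccountType

.OUTPUTS
200:
    description: The created account.
    content:
        application/json:
            type: ADAccountType
400:
    description: The request was malformed.
#>
param($Body)
```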
Receive client events from the PowerShell Universal server.
Event Hubs provide the ability to connect clients to the PowerShell Universal server. Once connected, the PowerShell Universal server can send messages to the connected clients, and they will run a local PowerShell script block.
To create an event hub, click APIs \ Event Hub and click Create New Event Hub. Event Hubs are named and can optionally enforce authentication and authorization.
From within the PowerShell Universal server, you can send events from a hub to connected clients using the Send-PSUEvent
cmdlet.
The -Data
parameter accepts an object that will be serialized using CLIXML and sent to the client. The data will be deserialized before being passed to the script block.
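A minimal sketch of sending an event; the hub name and payload are illustrative.

```powershell
# Send an object to all clients connected to the 'Monitoring' hub.
# The object is serialized as CLIXML before transport.
Send-PSUEvent -Hub 'Monitoring' -Data ([PSCustomObject]@{
    Message = 'Cache cleared'
    When    = Get-Date
})
```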
You can also run commands. This does not require defining a script on the event hub client. You can also use the Invoke-PSUCommand
alias to mimic native PowerShell behavior.
This feature is only available when sending data to an individual client, rather than all clients connected to a hub.
This example allows for sending scripts to remote machines and executing them with a generic event hub script.
First, create an event hub in PowerShell Universal. This example does not use authentication.
Next, install the Event Hub Client on the remote machine. Create a configuration file in %ProgramData%\PowerShellUniversal\eventHubClient.json
.
Next, create a helper script.ps1 to receive the event hub data and process requests from PSU to invoke scripts. It creates a new temporary PS1 file and uses the $EventData
passed down from the event hub message with the contents and parameters for the script.
In PowerShell Universal, add a script that you want to run on the remote machine. In this example, it simply starts a process.
Finally, add another script that sends the event down to the client. This could be from an API or an App as well. It uses Get-PSUEventHubConnection
to get the target computer’s connection ID and then sends an event with the contents of a script and any parameters for that script. Because the script on event hub side is generic, it will just run whatever is passed to it.
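A sketch of the sending side described above, targeting a single connected client. The connection properties and the -ConnectionId parameter name are assumptions about the API shape.

```powershell
# Find the target machine's connection (property names are assumptions).
$connection = Get-PSUEventHubConnection |
    Where-Object { $_.ComputerName -eq 'REMOTE01' }

# Bundle the script contents and parameters for the generic client script.
$payload = @{
    Script     = Get-Content -Raw -Path 'C:\scripts\restart-spooler.ps1'
    Parameters = @{ ServiceName = 'Spooler' }
}

Send-PSUEvent -Hub 'ScriptHub' -Data $payload -ConnectionId $connection.ConnectionId
```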
From here you could even use the script to schedule jobs to run on the remote machines using the event hub client.
Authentication and authorization for REST APIs.
Once enabled, you will be able to enforce authentication and authorization on your endpoints.
You can define secure endpoints in the UI by enabling authentication. You will find endpoint authentication and authorization settings under the Security tab of an endpoint's properties.
You can also define secure endpoints using the .universal/endpoints.ps1
file or the Management API using New-PSUEndpoint
.
When authentication is enabled, it will enforce the use of one of the configured authentication methods. APIs support the following methods.
JWT App Tokens
Windows Authentication
Cookie Authentication
Basic Authentication
Once you have defined a secure endpoint, you will need to provide authentication and authorization to access the endpoint.
Note that if you are hosting in IIS and do not have Anonymous Authentication enabled, you will not be able to pass app tokens to the PowerShell Universal server.
To authenticate with tokens, you first need to generate a new app token. You can use the Grant-PSUAppToken
cmdlet to do so remotely, or you can create an app token in the UI.
Hover over your user name in the top right of the admin console, click Tokens and click Create Application Token.
Once you have created your app token, you can now use it to authenticate against the secure endpoint. To do so, pass the Authorization header along with the request.
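A sketch of calling a secure endpoint with a bearer token; the endpoint URL is illustrative.

```powershell
# Pass the app token as a bearer token in the Authorization header.
$appToken = '<your app token>'
Invoke-RestMethod -Uri 'http://localhost:5000/api/user' -Headers @{
    Authorization = "Bearer $appToken"
}
```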
To authenticate with cookies, you will first need to call the login API to receive a valid cookie from the system. You can use Invoke-WebRequest
to do so. Pass the user name and password as the body. Specify the -SessionVariable
parameter to establish a session.
Once you have successfully authenticated, you can use your $mySession
variable to call secure endpoints.
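A sketch of the cookie flow; the /api/v1/signin path and body shape are assumptions, so verify them against your server version.

```powershell
# Log in once to capture the auth cookie in a session variable.
Invoke-WebRequest -Uri 'http://localhost:5000/api/v1/signin' -Method POST `
    -Body (@{ username = 'admin'; password = 'password' } | ConvertTo-Json) `
    -ContentType 'application/json' -SessionVariable mySession

# Reuse the session for subsequent calls to secure endpoints.
Invoke-RestMethod -Uri 'http://localhost:5000/api/user' -WebSession $mySession
```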
In addition to creating endpoints that require authentication, you can also enforce roles by defining a role in the New-PSUEndpoint
cmdlet or by selecting one in the UI. If a role is selected, the user making the request must possess that role.
Windows and Cookie authentication will assign roles based on the Identity of the user and the role policies as they are applied.
JWT app tokens will use the role that was defined when they were generated.
Error handling for Universal API.
By default, endpoints will return a 200 OK message even if there are errors. If an error occurs, you will get a blank response from the endpoint. This document demonstrates different ways to handle errors within APIs.
To automatically return errors from APIs, you can change the default behavior by setting the -ErrorAction
parameter of New-PSUEndpoint
to Stop
. Any errors will cause a 500 Internal Server Error to be returned with a list of the errors and a stack trace.
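A minimal sketch; the endpoint path and failing command are illustrative.

```powershell
# Any error in this endpoint now produces a 500 response instead of a
# blank 200.
New-PSUEndpoint -Url '/item' -Method GET -ErrorAction Stop -Endpoint {
    Get-Item -Path 'C:\does\not\exist'
}
```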
Terminating errors will always return a 500 Internal Server Error.
You will notice different behavior in Windows PowerShell and PowerShell 7 when calling REST APIs that return errors. In Windows PowerShell, you will receive a generic error that doesn't return the error message.
In PowerShell 7, when an error is returned, you will see the error message returned.
You can retrieve the error message in Windows PowerShell by using the following syntax.
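A sketch of the standard Windows PowerShell pattern for reading the response body; the endpoint URL is illustrative.

```powershell
# In Windows PowerShell, the response body must be read from the
# underlying web exception.
try {
    Invoke-RestMethod -Uri 'http://localhost:5000/item'
}
catch {
    $stream = $_.Exception.Response.GetResponseStream()
    $reader = New-Object System.IO.StreamReader($stream)
    $reader.ReadToEnd()   # the error message returned by the endpoint
}
```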
To manually return errors, you need to use the New-PSUApiResponse
cmdlet. This cmdlet allows you to define the status code and body for the response.
In this example, we are returning a 404 error code from the endpoint.
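A sketch of such an endpoint; the lookup logic is illustrative, and the response is only returned when the condition is met.

```powershell
# Returns a 404 when the item is missing; otherwise the item content is
# returned with the default 200 status.
New-PSUEndpoint -Url '/item/:id' -Method GET -Endpoint {
    param($Id)
    $item = Get-Item -Path "C:\items\$Id.json" -ErrorAction SilentlyContinue
    if ($null -eq $item) {
        New-PSUApiResponse -StatusCode 404 -Body "Item $Id was not found."
    }
    else {
        $item | Get-Content -Raw
    }
}
```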
Similar to the automatic error codes, error codes returned manually will also display better in PowerShell 7. Here's an example of calling the endpoint.
If called from Windows PowerShell, you will receive an error similar to the one returned automatically.
You can choose to return error codes if certain conditions are met by using your PowerShell script within the endpoint.
Rate limiting options for Universal.
PowerShell Universal lets you rate limit requests made to the web server. You can configure rate limiting per endpoint and per period. By default, clients are rate limited based on their IP address.
Configuration data for rate limits are stored in the ratelimits.ps1
file.
To configure rate limiting, visit the APIs / Rate Limiting page. Click the Add button and define a new rate limit rule.
Rate limiting affects all URLs for the server. If you enforce rate limiting that isn't correctly configured, you can negatively affect the management API.
The method is the HTTP method for this rule. If you use *
, this rule affects all HTTP methods. You can also select a single method by picking it from the drop down.
The endpoint is the URL that you are rate limiting. You can rate limit all URLs by using a *
. You can define specific URLs by defining the relative path: /api/user
.
This is the number of requests in the time frame before rate limiting kicks in.
This is the period over which the rate limit is counted. For example, if you select a period of 10 minutes and a limit of 100, then up to 100 requests can be made to the selected method and endpoint within those 10 minutes.
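A sketch of a rule as it might be stored in ratelimits.ps1: 100 requests per 10 minutes for any method and endpoint. The cmdlet parameter names are assumptions, so verify them against your installed module.

```powershell
# Limit all methods and endpoints to 100 requests per 10-minute window.
New-PSURateLimit -Method '*' -Endpoint '*' -Limit 100 -Period (New-TimeSpan -Minutes 10)
```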
To disable rate limiting for particular IP Addresses, clients, and endpoints, add them to the rate limiting allow lists. Find these by clicking the settings button.
Jobs are the history of scripts that have been run.
Jobs are the result of running a script. Jobs are retained based on the script and server level settings.
Jobs can be viewed on the Automation / Jobs page. Click the View button to navigate to a job. Jobs in progress can also be cancelled.
Standard PowerShell streams such as information, host, error, warning and verbose are shown within the output pane.
Pipeline output for jobs is also stored within PowerShell Universal. Any object that is written to the pipeline is stored as CliXml and available for view within the Pipeline Output tab.
You can expand the tree view to see the objects and properties from the pipeline.
Any errors written to the error stream will be available on the Error tab within the job page.
Jobs will return various statuses depending on configuration and the result of the execution. Settings that can affect job status include:
ErrorActionPreference
WarningActionPreference
The following table describes how PowerShell Universal treats statuses.
Some jobs will require feedback. Any script that contains a Read-Host
call will wait until there is user interaction with that job. The job will be in a Waiting for Feedback state, and you can respond by clicking the Respond to Feedback button on the job page.
To accept a SecureString
with a password input field, you can use the -AsSecureString
parameter of Read-Host
.
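A minimal script illustrating both prompts; a job running it enters the Waiting for Feedback state at each Read-Host call.

```powershell
# The first prompt accepts plain text; -AsSecureString masks the second
# response and returns a SecureString.
$computer = Read-Host -Prompt 'Computer name'
$password = Read-Host -Prompt 'Password' -AsSecureString
"Connecting to $computer..."
```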
You can also call PowerShell Universal scripts from within other scripts. When running a job, you don't need to define an app token or the computer name manually; these will be defined for you. You can just call Invoke-PSUScript
within your script to start another script. Both jobs will be shown in the UI. If you want to wait for the script to finish, use Wait-PSUJob
.
You can use the Wait-PSUJob
cmdlet to wait for a job to finish. Pipe the return value of Invoke-PSUScript
to Wait-PSUJob
to wait for the job to complete. Wait-PSUJob
will wait indefinitely unless the -Timeout
parameter is specified.
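A one-line sketch of the pattern; the script name and timeout value are illustrative.

```powershell
# Start a child script and block until the resulting job completes or
# the timeout elapses.
Invoke-PSUScript -Script 'Child.ps1' | Wait-PSUJob -Timeout 60
```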
You can use the Get-PSUJobPipelineOutput
cmdlet to return the pipeline output that was produced by a job. This pipeline output will be deserialized objects that were written to the pipeline during the job. You can access this data from anywhere you have access to the PowerShell Universal Management API.
You may need to return the output from a script's last job run. To do this, use a combination of cmdlets to retrieve the script, the last job's ID, and then return the pipeline or host output.
The following example invokes a script, stores the job object in a $job
variable, waits for the job to complete and then returns the pipeline and host output.
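A sketch of that flow using integrated mode (supported by these cmdlets per the list below); the exact parameter names (-Script, -Job) are assumptions.

```powershell
# Invoke the script and capture the job object.
$job = Invoke-PSUScript -Script 'Report.ps1' -Integrated
# Block until the job finishes.
$job | Wait-PSUJob
# Read back both kinds of output.
Get-PSUJobPipelineOutput -Job $job -Integrated   # deserialized pipeline objects
Get-PSUJobOutput -Job $job -Integrated           # host/stream output
```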
If you are using PowerShell Universal 2.4 or later, you can use the -Wait
parameter of Invoke-PSUScript
to achieve this.
The integrated mode allows calling these cmdlets from within PowerShell Universal without an App Token or Computer Name. It uses the internal RPC channel to communicate.
You can set the -Integrated
parameter to switch to integrated mode. This parameter does not work outside of PowerShell Universal.
The following cmdlets support integrated mode.
Get-PSUScript
Invoke-PSUScript
Get-PSUJob
Get-PSUJobOutput
Get-PSUJobPipelineOutput
Get-PSUJobFeedback
Set-PSUJobFeedback
Wait-PSUJob
You can call jobs over REST using the management API for PowerShell Universal. You will need a valid app token to invoke jobs.
To call a script, you call an HTTP POST to the script endpoint with the ID of the script you wish to execute.
You can provide parameters to the job via a query string. Parameters will be provided to your script as strings.
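A sketch of such a call; the /api/v1/script path and the script ID are assumptions about the management API layout.

```powershell
# Start script ID 12 with a Name parameter supplied via the query string.
$headers = @{ Authorization = "Bearer $appToken" }
Invoke-RestMethod -Method POST -Headers $headers `
    -Uri 'http://localhost:5000/api/v1/script/12?Name=Adam'
```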
You can set the environment by passing the environment property in the job context. The property must be the name of an environment defined within your PSU instance.
You can set the run as account by passing in the name of a PSCredential variable to the Credential property.
The default behavior for PowerShell Universal is to track jobs based on an autoincrementing int64-based ID. Every time a new job is run, the job is one higher in ID than the last. Because of this behavior, it is easy to guess other job IDs and can potentially lead to a security risk.
In order to avoid this issue, you can enable the JobRunID
experimental feature. Although internally the system still creates jobs with ascending numeric IDs, you cannot access jobs based on those IDs. Instead, a new field called RunID
is used. RunID
utilizes a GUID
rather than an ID for lookups. This greatly reduces the ability of an attacker to guess a job ID.
You will need to enable this feature to use it.
Installation instructions for PowerShell Universal.
The MSI install creates a PowerShell Universal service. By default, PowerShell Universal listens on port 5000. You can navigate to http://localhost:5000 to access the admin console.
System installs run as a Windows service. User installs run when the user logs in to the machine. The user install runs in the user's context.
The following table contains the parameters you can specify if running msiexec
against our MSI install for automation purposes:
The example below shows how to run msiexec.exe
to install PowerShell Universal and provide parameters to the installer:
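A sketch of a silent install; the MSI file name and the specific values are illustrative, and the property names come from the parameter table below.

```powershell
# Silent install that changes the HTTP port and starts the service
# after installation.
msiexec /i PowerShellUniversal.msi /qn `
    TCPPORT=8080 `
    REPOFOLDER=C:\PSU\Repository `
    STARTSERVICE=1
```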
You can start Universal by unzipping the contents, unblocking the files and then executing Universal.Server.exe
.
You can use the following command line on Linux to install and start PowerShell Universal:
You can use systemd
to start PowerShell Universal as a service. The below script is an example of downloading a version of PowerShell Universal and installing it as a service:
You can use the PowerShell Universal PowerShell module to install the Universal server. To install the module, use Install-Module
.
To install the Universal server, you can use Install-PSUServer
.
Running this command on Windows creates and starts a Windows service on your machine. Running this command on Linux creates and starts a systemd service on your machine. Running this command on Mac OS downloads and extracts the PowerShell Universal server.
Chocolatey packages for PowerShell Universal are usually available within a week of release but are not available the day of a release.
You can log in with the "admin" user and any password.
PowerShell Universal takes full advantage of PowerShell and the PowerShell SDK. It includes PowerShell scripts directly in the product. Consider configuring antivirus to allow execution of PowerShell scripts in PowerShell Universal.
The following directories contain examples from a standard Windows system of scripts and executable files that you may need to exclude from antivirus checks. Changing paths within appsettings.json or within the installer requires changing which directories are excluded.
It may be necessary to exclude certain executables that run PowerShell scripts. The below is a list of executables that run PowerShell from PowerShell Universal.
You can use the $ENV:PSUDefaultAdminName
and $ENV:PSUDefaultAdminPassword
environment variables to change this behavior. These values are only used if no administrator account already exists. This is useful for cloud-based installations.
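A minimal sketch; the values are examples only.

```powershell
# Set these before the first service start; they are ignored once an
# administrator account already exists.
$env:PSUDefaultAdminName = 'clouduser'
$env:PSUDefaultAdminPassword = 'MyS3cureP@ssw0rd!'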
The PowerShell Universal Agent executes Event Hub actions. Install it depending on your environment:
ZIP files for each platform we support are on our downloads page. Each ZIP contains a PowerShellUniversal.Agent.exe
or PowerShellUniversal.Agent
file that can start an agent. Run the process as a service for it to start whenever the machine reboots.
At this point, Universal is up and running. Visit http://localhost:5000
or your default port to navigate to the admin console. Log in with the default admin name and password or create a default admin account.
PowerShell Universal automated test support.
You can work with tests by visiting Automation \ Tests.
Test files are located based on file name. Any files found in the repository that end in .Tests.ps1
will be listed in the Test Files tab. You can create new test files on the Automation \ Scripts page.
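A minimal example of a test file, using Pester v5 syntax (the function under test is defined inline for illustration).

```powershell
# Get-Greeting.Tests.ps1 - discovered because the name ends in .Tests.ps1
BeforeAll {
    function Get-Greeting { param($Name) "Hello, $Name" }
}

Describe 'Get-Greeting' {
    It 'greets the caller by name' {
        Get-Greeting -Name 'PSU' | Should -Be 'Hello, PSU'
    }
}
```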
Tests can be run by clicking the Run Test or Run All Tests buttons. Run Test will run the individual Test Files and Run All Tests will run all the Test Files.
You will have the option to select the environment, credential and computer to run the tests.
Test Results are produced after the test run finishes. You will be able to see the overall status of the test run and the result of individual test suites and cases.
API documentation can be produced for your endpoints by creating a new OpenAPI definition and assigning endpoints to it. OpenAPI is a standard format and can be consumed by a variety of tools to create clients. The Swagger dashboard is also integrated into PowerShell Universal to provide interactive documentation.
Event Hubs require a license.
You will need to install and configure the Event Hub client to use Event Hubs.
To authenticate with Windows authentication, you can use the -UseDefaultCredentials
parameter of Invoke-RestMethod
and Invoke-WebRequest
. This will perform negotiate authentication whether you are running inside IIS or as a service.
You can run scripts in PowerShell Universal. PowerShell Universal integrates deeply with the PowerShell host to provide a UI for param blocks, output rich objects, display progress and even allow the user to provide feedback.
You can schedule jobs to run within the PowerShell Universal platform.
Run ad-hoc commands in any of your configured environments and, optionally, as alternate credentials.
Rate limiting requires a license.
You can use Invoke-PSUScript
to invoke jobs from the command line. You will need a valid app token to do so. Parameters are defined using dynamic parameters on the Invoke-PSUScript
cmdlet.
Variables defined in jobs can be found on the Variables page.
MSI downloads are available on our download page.
You can also download the ZIP from our download page if you would like to xcopy deploy the files on Windows or Linux.
You can install PowerShell Universal using the Chocolatey package. The package runs the MSI install. It installs Universal as a service and opens a web browser after the install.
Please visit the IIS hosting documentation for information on how to configure PowerShell Universal as an IIS website.
The PowerShell Universal Agent MSI is on our download page. After installing the MSI, a PowerShell Universal Agent service runs on your machine. Configure the agent to connect to PowerShell Universal.
This feature requires a license.
PowerShell Universal integrates with the Pester test framework to allow you to execute test suites and view results. Results are stored in the PowerShell Universal database so you can view historical results.
Error
A script had a non-terminating error.
Set ErrorActionPreference to SilentlyContinue
Warning
A script had a warning.
Set WarningActionPreference to SilentlyContinue
Failed
A script had a terminating error.
Handle the terminating error or catch it.
Waiting on Feedback
A script is waiting on feedback, such as Read-Host.
Avoid user callbacks such as Read-Host.
Running
The script is currently running.
N/A
Queued
The script is currently queued to run.
N/A
INSTALLFOLDER
The installation folder for PowerShell Universal
%ProgramFiles(x86)%\Universal
TCPPORT
The TCP port the HTTP server will be listening on.
5000
REPOFOLDER
The repository folder to save the configuration files to.
%ProgramData%\UniversalAutomation\Repository
CONNECTIONSTRING
The SQL, SQLite, or PostgreSQL connection string.
Data Source=%ProgramData%\UniversalAutomation\database.db
DATABASETYPE
SQL, SQLite, or PostgreSQL
SQLite
STARTSERVICE
Whether to start the service after install (0 or 1)
1
SERVICEACCOUNT
The service account to set for the Windows service. Use the format of domain\username.
None
SERVICEACCOUNTPASSWORD
The service account password to set for the Windows Service. The password will be masked with ***'s in the installer log.
None
TELEMETRY
Anonymous telemetry collection
0
ADDPSMODULEPATH
Adds the PowerShell Universal module directory to the PSModulePath environment variable.
1
STARTSERVICE
Whether to start the service after install.
1
INSTALLTYPE
Whether to perform a server or user install.
Server
%ProgramData%\PowerShellUniversal
Contains log files and appsettings.json
%ProgramData%\UniversalAutomation
Contains PowerShell scripts and artifacts. Contains the single file database when not using SQL integration.
%ProgramFiles(x86)%\Universal
Contains PowerShell Universal application executables, libraries and modules.
Universal.Server.exe
The PowerShell Universal core service.
Universal.Agent.exe
The PowerShell Universal agent environment executable.
pwsh.exe
PowerShell 7.x
PowerShell.exe
PowerShell 5.x
Trigger scripts when events happen with PowerShell Universal.
Triggers allow for automation jobs to be started when certain events happen within PowerShell Universal. For example, this allows you to take action when jobs complete, the server starts or dashboards stop. Triggers are useful for assigning global error handling or sending notifications when certain things happen.
The following types of events can be assigned a trigger.
Job Started
Job Completed
Job Requesting Feedback
Job Failed
Dashboard Started
Dashboard Stopped
Server Started
Server Stopping
User Login
Use of a Revoked App Token
API Authentication Failed
API Error
New User Login
Git Sync
License Expired
License Expiring
The user login event takes place when a user accesses PowerShell Universal. The script will receive a $User
parameter with user information.
The new user login event takes place the first time a user accesses PowerShell Universal. The script will receive a $data
parameter with user information. The data structure is shown below.
The app token event takes place when a revoked app token is used. The script will receive a $data
parameter that contains the contents of the app token as a string.
This trigger occurs when a git sync is run. This trigger will fire for both successful and unsuccessful git syncs.
You will receive the following object in the $data
parameter.
Global triggers will start the assigned script whenever the event type is invoked.
For example, the Script.ps1
will be run whenever any job is run.
Resource triggers will start the assigned script when the event takes place on the selected resource.
For example, the Script.ps1
will be run whenever the Dashboard
is stopped.
Whenever a job is started from a trigger, it will be provided with metadata about the object that caused the event to trigger.
Triggers related to jobs will be provided a $Job
parameter.
Triggers related to dashboards will be provided a $Dashboard
parameter.
Triggers related to the server status will not receive a parameter.
Using the -Condition
parameter of New-PSUTrigger
, you can determine whether or not a trigger should be run based on local conditions on the server. Return $true
or $false
from the condition.
For example, you can disable a trigger if the Environment
environment variable is not set to production
.
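A sketch of that condition; the -Condition parameter comes from the text, while the other parameter names and values are assumptions.

```powershell
# Only run the trigger script when the server's Environment variable is
# set to production.
New-PSUTrigger -Name 'NotifyOnFailure' -EventType JobFailed -TriggerScript 'Notify.ps1' -Condition {
    $env:Environment -eq 'production'
}
```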
This document covers upgrading the PowerShell Universal application.
This document will cover the upgrade process for production PowerShell Universal instances. We will cover the following topics.
Data Backup
Upgrade Process
Upgrade Validation
The Universal application binaries can generally be upgraded without having to change the configuration or database manually, but we do recommend backups of production data.
For production environments, we recommend deploying PowerShell Universal to a staging or development environment prior to major upgrades. This allows for testing before end users are affected. You can use Development Licenses to test changes in PowerShell Universal instances without purchasing another license.
PowerShell Universal uses a script-based configuration system alongside a database used for retention of entities such as app tokens, job history and identities. If possible, you will want to back up these items before running an upgrade for easy rollback in case an issue is encountered during validation.
Backing up the database ensures that all app tokens, job history, identities and database secrets are retained in the case of an upgrade failure. Upgrades may also adjust the schema of a SQL database, which could require a rollback of not only the data but also the schema of the tables in the database.
By default, PowerShell Universal uses a single file database called SQLite. Unless configured otherwise, the database is stored in %ProgramData%\UniversalAutomation
. You should have a database.db
and possibly a database-log.db
. Both of these files should be backed up. The service must be stopped in order to back up the files.
When using SQL for persistence, back up the entire database (including the schema). There isn't necessarily a need to stop the PowerShell Universal service when backing up the database, but the service may continue to write to the database (for example, when running scheduled jobs) after the backup has completed.
Scripts make up the main configuration data to back up when upgrading a production PowerShell Universal instance. For production, we recommend using a version control system. You can also take advantage of the built-in git integration. If you are using a two-way sync for PowerShell Universal git integration, consider tagging your git branch prior to the upgrade to allow for easy rollback of unexpected changes within the git repository.
Below are sections for each type of system upgrade and the steps that you should take based on how you originally installed PSU.
When installing via the MSI, you will want to follow the same backup procedures above.
You will want to back up the appsettings.json
file stored in %ProgramData%\PowerShellUniversal
. This file contains information such as port, data storage location and other server settings. Typically, the MSI will not make changes to this file once created. It will use the settings found for the upgraded version. That said, if necessary, the MSI will make changes to the appsettings file. These changes are considered breaking and will be listed in the changelog for the release.
When running an MSI upgrade, the PSU service is not uninstalled, and thus, the service account will still be set once the service starts up.
Once all the configuration files and the database are backed up, you can run the new MSI installer.
For major upgrades (e.g. v4.3.4 to v5.0.4), you will need to uninstall the previous version prior to installing the new version.
The installer may prompt for a restart of the machine if files are locked. The PSU MSI will uninstall all the files in the installation directory and install entirely new files.
Once the MSI has completed, you can navigate to your PowerShell Universal admin console to perform installation validation.
Below you will find information about upgrading an IIS install.
In addition to the files listed to backup above, you will also want to consider backing up your web.config
file. If you have made no changes to this file, you do not need to back it up.
The web.config
file that is included in the application installation directory will be overwritten during upgrades. If you have moved your web.config file to an alternate location, it will not be overwritten. When creating an IIS website, you can simply include the web.config
file in the web app's directory and have the binaries stored in a different location.
When upgrading with IIS, you will need to first stop your application pool to ensure that the binaries used by IIS are no longer in use and then replace the binaries with the new ones. To ensure that the upgrade works as expected, it's recommended to delete all the application files and then unzip the new ones into the same directory to avoid assembly conflicts.
Once you have copied the new files and unblocked them, start the app pool, navigate to the PowerShell Universal Admin Console and perform installation validation.
The Universal module can be used to upgrade installations of PowerShell Universal previously installed by the module.
Do not use the Universal module to upgrade instances installed via MSI.
Follow the backup procedures above and then perform the upgrade.
First, upgrade the local PowerShell Universal module and verify the expected version is installed.
Next, run Update-PSUServer
to download and unzip the new PSU instance.
After the upgrade is complete, navigate to the PowerShell Universal Admin Console and begin upgrade validation.
Perform the necessary backup procedures and download the latest ZIP of PowerShell Universal.
Stop the PowerShell Universal service. Delete the existing PowerShell Universal application files. Extract the ZIP files to the same directory. Finally, run Unblock-File
against the directory to ensure that PSU can execute properly. Always run this command as administrator.
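A one-line sketch of that step; the install path is illustrative.

```powershell
# From an elevated session, unblock everything under the install directory.
Get-ChildItem -Path 'C:\Program Files (x86)\Universal' -Recurse | Unblock-File
```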
After the upgrade is complete, navigate to the PowerShell Universal Admin Console and begin upgrade validation.
After running an upgrade, you should perform basic validation against your PSU server to ensure that it is fully functional.
Verify that there are no errors within the notification drop down. They may be a sign of issues during the upgrade.
Upgrades to PowerShell Universal may change assembly versions of DLLs shipped with the platform. This can cause other modules to fail to load. While this may not be obvious at first, you may consider taking an inventory of modules used in your platform to ensure that the versions are consistent before and after the upgrade to limit changes.
If you have installed a version of the Universal
module outside of PowerShell Universal (for example, with Install-Module
), you must make sure to update the module or it can conflict with the new one installed with PowerShell Universal.
The most common upgrade issues are due to changes in the Universal App framework. Apps can be complex, and bug fixes or features can sometimes cause issues for one user's app while fixing issues pertaining to another user's app. Please read the changelog before upgrading to understand the impact of changes made to the app framework, and consider testing the app with development data before upgrading in production.
When using nightly builds, you cannot upgrade from one nightly version to another. You can upgrade from a generally available version to a nightly version. In order to test a new nightly build, you will need to uninstall the current nightly build, rollback the database schema and then install the new version. You can roll back the database schema with psu.exe
.
The most common upgrade issue is that Unblock-File
is not called properly on the extracted files when performing an upgrade of an IIS ZIP install. Also make sure to run the Unblock-File
command recursively and from within an administrative session.
Another common issue is extracting the files over the top of the existing files. This can cause assembly conflicts and puts the application in an unknown state. Follow the IIS upgrade documentation and delete the existing files before extracting the new ones.
When new functionality is added to PowerShell Universal it is typically done using new cmdlets. If older versions of the PowerShell Universal module are installed on the system, it can cause conflicts with the one shipped within the installation media. Ensure that you have removed older versions of the Universal
module if you encounter these errors.
This can happen if SQL schema upgrades are not being run during upgrades. If you set the RunMigrations
setting to false
in appsettings.json
, you must run the migrations manually or the PowerShell Universal service will not function properly.
These changes can be visual or functional. Please ensure that you review the changelog for items that may be related to the change you are seeing. Consider posting on the forums or opening a GitHub issue to see if the behavior is as designed and if there is a viable workaround.
The licensing model of PowerShell Universal provides licensed users the ability to upgrade to whatever is the newest version as long as they have an active perpetual or subscription license. If you attempt to upgrade a server that is no longer within the license window, the server will not function as expected. You will need to downgrade back to the previous version to restore functionality.
Additionally, you may encounter issues due to the PSU service restart. When the service starts, it verifies license subscription status. If it fails to do so, it may not be licensed properly and cause other issues. The root cause is typically networking issues while attempting to access the IronmanSoftware.com website for activation. Offline license keys do not contact the IMS website for activation and will not encounter this issue.
The drag and drop page designer has been removed in favor of Portal Pages and Widgets.
The drag and drop page designer for apps has been removed. Apps created with the designer will still function.
Access Controls have been removed in favor of Permissions. You can also use the Portal to assign resources, like scripts, to users without the need for complicated permissions.
Prior to v5, cmdlets would send data over HTTP or by using an internal gRPC channel. Now, all cmdlets use an externally facing gRPC Channel that is protected by authentication and authorization. It no longer uses standard REST API HTTP calls.
This can be a problem for PowerShell Universal instances behind reverse proxies and requires that the proper header values are sent.
Please review the Module documentation for more information.
The cmdlet you are calling does not have access to the PowerShell Universal APIs. You will need to specify an -AppToken parameter on the cmdlets in order to use them.
You can also enable the permissive API security model to allow internally called cmdlets from PowerShell Universal without the need for authorization.
The cmdlets are unable to determine how to call the PowerShell Universal APIs. You will need to either specify a -ComputerName parameter or set up the API URL in appsettings.json.
If you are using a self-signed certificate, you will need to specify the -TrustCertificate parameter of the cmdlets.
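A sketch combining these connection parameters follows; the host name and token variable are illustrative.

```powershell
# Illustrative: explicitly target a PSU instance with an app token and
# accept its self-signed certificate.
Get-PSUScript -ComputerName 'https://psu.contoso.com' -AppToken $AppToken -TrustCertificate
```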
The Windows PowerShell 5.1 environment no longer uses PowerShell.exe directly. It instead uses a .NET Framework version of the Universal.Agent.exe executable. This allows for the greatest compatibility with PowerShell Universal libraries and other modules. The agent still uses the PowerShell assemblies found on the executing machine.
PowerShell.exe is no longer used by default, although it can still be used in minimal environments.
The default PowerShell 7 environment uses a .NET version of Universal.Agent.exe executable running PowerShell 7.4. This allows for the greatest compatibility with PowerShell Universal libraries and other modules.
It's still possible to use the pwsh.exe process in custom environment configurations.
If you are hosting in IIS, ensure that you install the .NET 8.0 hosting bundle.
The integrated environment now uses PowerShell 7.4.
SQLite is the default persistence method. You will need to perform a manual conversion from LiteDB before installing version 5.
LiteDB has been removed as a supported database engine. Included with the PowerShell Universal installation files, you will find psudb.exe, which can be used to convert a LiteDB database into a SQLite database. Use the following command line.
The tool will create a database.bak file before performing the conversion. Progress will be reported in the console.
In order for the PowerShell Universal installer to run successfully, you will need to update the database before running the MSI installer. Below are the steps to take to do so.
Download the ZIP package for Windows and extract to a local directory.
Stop the PowerShell Universal service
Run the psudb.exe command from the ZIP directory, as stated above, to convert the database file in %ProgramData%\UniversalAutomation
Update the %ProgramData%\PowerShellUniversal\appsettings.json file to use the SQLite plugin rather than the LiteDB plugin
Run the PowerShell Universal v5 installer to upgrade the application files.
Desktop mode has been removed. Resources such as hot keys, file associations and shortcuts are no longer supported. The MSI now supports User scope installs that will run as the current user and start upon login.
In previous versions of PowerShell Universal, this command would install to a directory and create the service manually. This command now installs from MSI. If you previously installed with this module, you will need to remove the existing install with a previous version of the module and then install with the new version of the module.
Open a new command prompt and run the following.
PowerShell Universal no longer supports storing the git repository directly in the database. We recommend using a remote git provider like GitHub, GitLab, or Gitea. PowerShell Universal v5 does support local git repositories without the need to sync to a remote. This allows for storing file history directly on the PowerShell Universal server.
Maps no longer support heatmaps or marker clusters.
Components are the building blocks of a PowerShell Universal app.
A Universal app is composed of components. When building an app, you can simply call the PowerShell cmdlets within your app script to create a new component.
There are over 50 components that you can use in your apps. Some of the commonly used components include:
Surfaces
Schedule scripts to run in PowerShell Universal.
Assign schedules to scripts to define frequency and other parameters for a script, such as run as credentials.
To schedule a job, go to the Automation / Schedules page and click the New Schedule button. To schedule a script, go to the script's page and click Schedule.
You can define schedules based on simple selections like Every Minute or Every Hour, or you can define CRON expressions yourself for more configurable schedules. You can also run One Time schedules that run once at a later date.
You can also define under which user the scheduled job runs, as well as which PowerShell version it uses.
Simple schedules are really just helpers for various standard CRON schedules. When you select one, it defines a CRON schedule for your script.
CRON schedules use CRON expressions to define schedules. PowerShell Universal takes advantage of Chronos. For examples of valid expressions, click here.
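As a sketch, a CRON-based schedule can be created from PowerShell like this; the script name is illustrative.

```powershell
# Runs Backup.ps1 at the top of every hour (standard CRON expression).
New-PSUSchedule -Script 'Backup.ps1' -Cron '0 * * * *'
```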
One-time schedules will run once in the future. You can select the time and day of when they will run.
Continuous schedules run over and over again. You can define a delay between each scheduled job run.
Schedules support setting parameters for scripts. For example, if you have a script that accepts a parameter, you can choose to pass a value to the parameter during the schedule.
Within the modal for defining the schedule, you can set the parameter value.
When editing schedules from PowerShell, you can define the parameters on the New-PSUSchedule cmdlet. This cmdlet accepts dynamic parameters so that you can pass in the values for your schedule.
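For example, assuming a script Backup.ps1 that declares a Target parameter in its param block (both names are illustrative), New-PSUSchedule exposes it as a dynamic parameter:

```powershell
# -Target is assumed to be declared in Backup.ps1's param block;
# New-PSUSchedule surfaces it dynamically.
New-PSUSchedule -Script 'Backup.ps1' -Cron '0 0 * * *' -Target 'C:\Backups'
```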
When creating a schedule, you can specify the environment for your job to run in. By default, it will use the default environment. You can define an environment in the UI by using the Environment drop down, or by using the -Environment parameter of New-PSUSchedule.
You can define which user the schedule runs as by using the Run As selector in the UI. The Run As selector contains a list of PSCredential variables you have defined. You need to define a PSCredential variable before the Run As selector is visible. By default, scheduled jobs run under the credentials of the user that is running PowerShell Universal.
You can define a Run As user in a script by using the -Credential parameter. The value should be the name of the variable that contains your credential.
You can select the computer or computers on which to run the schedule. By default, schedules run on any available computer. If you select All Computers, the schedule runs on all computers connected to the PSU cluster. If you select a specific computer, the schedule runs on only that computer.
You can define conditions that determine whether a schedule should be run. This is useful if you are using the same repository scripts for multiple environments. Currently, conditions cannot be defined within the admin console. Conditions are passed to the current script and schedule as parameters. The condition scriptblock runs within the integrated environment.
The condition needs to return true or false. Below is an example of a condition where the schedule only runs if there is an environment variable named Slot that contains the value production.
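A sketch of such a condition, assuming the -Condition parameter of New-PSUSchedule accepts a script block:

```powershell
# Only run when the Slot environment variable is 'production'.
New-PSUSchedule -Script 'Deploy.ps1' -Cron '0 0 * * *' -Condition {
    $env:Slot -eq 'production'
}
```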
You can pause a schedule by setting the Paused property. When a schedule is paused, it does not run. This is useful to stop a schedule from running without deleting it.
You can set a time out for scheduled jobs. The time out is the number of minutes before the scheduled job is canceled.
The Random Delay property causes a schedule to start anywhere between 0 and 60 seconds from the scheduled time. This is useful when running many schedules at the same time. For example, if you had 10 schedules that start at midnight, you may want to set a random delay to limit resource contention on the PowerShell Universal service.
In multi-branch environments, it may be necessary to avoid running schedules based on the branch that is loaded in PowerShell Universal. You can use the -AvailableInBranch option on New-PSUSchedule to prevent a schedule from running in a certain branch. This value is also available in the admin console under the schedule settings when git is enabled.
PowerShell scripts to execute within PowerShell Universal.
You can create PowerShell scripts within PowerShell Universal to execute manually, on a schedule, or when events happen within the platform. They are stored on disk and they persist to a local or remote Git repository.
To add a new script, click the New Script button within the Automation / Scripts page. There are various settings you can provide for the script.
This is the name of the script as shown in Universal Automation. This is also the name used to persist the script to disk. The name needs to be unique within the current folder.
See Modules and Commands below.
The description of the script is shown in various places within the UA UI and is returned by the Universal cmdlets.
This prevents a script from running manually. This is enforced in the UI as well as the web server and cmdlets.
The max job history defines the number of jobs stored when running this script. It defaults to 100. Jobs are also cleaned up based on the server-wide job retention duration setting from the Settings / General page.
The error action changes how the script reacts when it has an error. By default, terminating and non-terminating errors are ignored and the script always succeeds. You can change this setting to stop to cause scripts to fail immediately when an error is encountered.
If you wish to write errors directly to the error pane without changing how the script is handled (for example, in an exception handler), use Write-PSUError to output the error record; it will appear in the job's error tab.
This allows you to define the required PowerShell environment for the script. By default, it uses the server-wide default PowerShell environment. PowerShell environments are automatically located the first time the Universal Server starts up, or they are read from the environments.ps1 file. You can also add environments on the Settings / Environments page.
The number of minutes before the script times out. The default value of 0 means the script will run forever. Once a script reaches its timeout, it is canceled.
When checked, this disables pipeline output storage. This greatly reduces jobs' CPU and storage overhead, but the script still writes to the information, warning, and error streams.
Defines the maximum concurrent jobs with which the script can be run. It defaults to 100.
You can run a script in the UI from the Automation / Scripts page by clicking Run or by clicking View and then Run. In each case, the Run Dialog appears, allowing you to select various settings for the job.
PowerShell Universal automatically determines the parameters as defined within your scripts. It takes advantage of static code analysis to determine the type, default values and some validation that is then presented within the UI.
For example, you may have a script with the following parameters:
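A param block such as the following (the parameter names are illustrative) is analyzed statically and rendered as input controls:

```powershell
param(
    [Parameter(Mandatory)]
    [string]$Name,       # required textbox
    [int]$Count = 10,    # number selector with a default value
    [switch]$Force       # rendered as a switch
)
```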
The result is a set of input options based on the types of parameters.
You can run scripts as another user by configuring secret variables. PowerShell Universal uses the Microsoft Secret Management module to integrate with secret providers. See variables for more information on secrets.
Create a new PSCredential secret variable.
Click Platform \ Variables and then click Create Secret. Select the PSCredential variable type. Enter the username and password. Ensure that the Disable Run As Support value is unchecked.
Run the Script and select the credential
Navigate back to Automation \ Scripts and click the Run Script button. Select an environment besides the Integrated environment. By default, this will be either PowerShell 7 or Windows PowerShell 5.1.
You will now be prompted with the Run As drop down to select the credential. From there, you can select the credential within the run dialog.
You can use the Computer dropdown to select other machines on which to run a script. The default value is to run on any available computer.
You can run a script on all computers by selecting the All Computers option from the Computer dropdown.
PowerShell Universal uses a least-busy load-balancing algorithm. If more than one server is a valid target for a job, PowerShell Universal selects the server running the fewest jobs.
You can use PowerShell remoting by taking advantage of Invoke-Command. PowerShell Universal does not support the use of Enter-PSSession or Import-PSSession.
You can use comment-based help to define the description, a synopsis, parameter-based help, and links for your scripts. These will be displayed within the PowerShell Universal UI.
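A sketch of a script with comment-based help; the service-restart logic and names are illustrative.

```powershell
<#
.SYNOPSIS
Restarts an application service.
.DESCRIPTION
Stops the named Windows service and starts it again, logging progress.
.PARAMETER Name
The name of the service to restart.
.LINK
https://docs.powershelluniversal.com
#>
param([string]$Name)

Restart-Service -Name $Name
```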
The above yields the following user interface. The synopsis displays as the short description, and a longer description displays in the description section. Links appear under the description.
Commands and cmdlets found in modules can be used as the target for scripts rather than authoring the script directly.
Let's assume that we have a module called PSUModule that contains the following function.
It's possible to expose this function as a script by using the following syntax in scripts.ps1.
The function surfaces just like other scripts within the admin console. Parameters, help text and other PSU features work the same as with scripts.
Using a script's job history, PowerShell Universal provides basic statistics about the execution of the script. These include success rate, average execution time, and breakdowns of environment, user, and computer execution.
While it's possible to start jobs using Invoke-PSUScript, it may be desirable to start a job using the PowerShell Start-Job cmdlet. Using Start-Job does not register the job with PowerShell Universal and the execution information will not be present in the jobs table.
The Integrated, PowerShell 7 and Windows PowerShell 5.1 environments are not compatible with Start-Job because they are custom PowerShell hosts. In order to use Start-Job, you will need to configure a custom PowerShell environment. Click Settings \ Environments. Next, click Create New Environment. Name the environment, select the Custom environment type and specify pwsh.exe as the executable path.
You will now be able to use this environment to run the Start-Job cmdlet.
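The two steps above can be sketched in configuration as follows; the environment name is illustrative.

```powershell
# environments.ps1 - a custom environment pointing straight at pwsh.exe.
New-PSUEnvironment -Name 'pwsh' -Path 'pwsh.exe'

# In a script assigned to the 'pwsh' environment, Start-Job now works:
$job = Start-Job { Get-Process }
$job | Wait-Job | Receive-Job
```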
Parameters for PowerShell Universal jobs.
Jobs support automatically generating forms with parameters based on your script's param block. The type of control changes based on the type you define in the block. Parameters that are mandatory will also be required by the UI.
Parameters can be defined without any type or parameter attribute, and they will show up as text boxes in the UI.
Universal supports various types of parameters. You can use String, String[], Int, DateTime, Boolean, Switch and Enum types.
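A single param block exercising each supported type might look like this (parameter names are illustrative):

```powershell
param(
    [string]$UserName,        # textbox
    [string[]]$Tags,          # multi-tag select box
    [int]$Retries,            # number selector
    [datetime]$StartDate,     # date and time selector
    [bool]$Enabled,           # switch
    [switch]$Force,           # switch
    [System.DayOfWeek]$Day    # select box built from the enum values
)
```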
You can define string parameters by specifying the [String] type or by not specifying a type at all. Strings will generate a textbox.
You can specify string arrays by using the [String[]] type specifier. String arrays will generate a multi-tag select box.
You can use the [DateTime] type specifier to create a date and time selector.
You can use the [Bool] type specifier to create a switch.
You can define a number selector by using the [Int] type specifier.
You can define a switch parameter using the [Switch] type specifier.
You can use System.Enum values to create select boxes. For example, you could use System.DayOfWeek to create a day-of-the-week selection box.
When you specify a PSCredential, the user will be presented with a drop down of credentials available as variables.
You can allow users to upload files by using the [File] type.
Files will be available as a PSUFile object in your scripts. This object has a byte[] array that you can use to process the file.
For example, you can get the string content for the file by converting it using the Encoding classes.
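A sketch of decoding an uploaded file; the Content property name on the PSUFile object is an assumption.

```powershell
param([File]$File)

# Assumes the uploaded bytes are exposed via a byte[] property named Content.
$text = [System.Text.Encoding]::UTF8.GetString($File.Content)
$text
```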
You can use the DisplayNameAttribute to set a display name for the script parameter.
You can define help messages for your parameters by using the HelpMessage property of the Parameter attribute.
You can use the Parameter attribute to define required parameters.
You can use both static and default values for parameters. The default value is calculated when the job is about to be run.
The ValidateSet attribute is used to enforce which values can be passed to a parameter. Learn about ValidateSet here. PowerShell Universal will automatically create a drop-down menu with the values provided to the ValidateSet attribute.
For example, a script could define a param block as follows.
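A param block using ValidateSet, with illustrative values:

```powershell
param(
    # PSU renders this as a drop-down with the three allowed values.
    [ValidateSet('Development', 'Staging', 'Production')]
    [string]$Environment
)
```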
The result is shown below.
You can pass parameters from PowerShell using the Invoke-PSUScript cmdlet. This cmdlet supports dynamic parameters. If you have a param block on your script, these parameters will automatically be added to Invoke-PSUScript.
For example, I had a script named Script1.ps1 and its contents were as follows.
I could then invoke that script using this syntax.
The result would be that Hello was output in the job log and pipeline.
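Putting the pieces together, the elided example likely resembled the following sketch; the parameter name is illustrative.

```powershell
# Script1.ps1 - echoes its single parameter to the pipeline.
param($Parameter1)
$Parameter1

# Invocation: -Parameter1 is a dynamic parameter generated from the param block.
Invoke-PSUScript -Script 'Script1.ps1' -Parameter1 'Hello'
```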
PowerShell Universal supports parameter sets. When a parameter set is defined, a drop down is provided that allows for switching between the sets.
In-browser PowerShell terminals.
Terminals are in-browser PowerShell consoles in which you can execute arbitrary commands. Terminals target an environment that you select and can optionally use Run As credentials to run as other users. Terminal history is maintained within the PowerShell Universal database. You can reconnect to disconnected terminals as long as they haven't timed out.
You can configure a new terminal by navigating to Automation \ Terminals and clicking Create New Terminal. You'll be able to select the environment and credential to run the terminal as.
To use a terminal, click the Open Terminal button for the terminal you wish to launch. Depending on your configuration, this may start a new PowerShell process based on the environment you selected.
Once the terminal has launched, you'll be able to issue commands.
To stop a terminal, you can navigate to the terminal instances tab on the Terminals page. Click the trash can to stop the terminal.
If you navigate away from PowerShell Universal, the terminal will go idle. You can reconnect to a terminal by clicking the Open Terminal button for the idle terminal instance.
Terminals will time out automatically after 5 minutes. You can customize the timeout by setting the -IdleTimeout parameter of New-PSUTerminal.
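A sketch of a terminal configuration with a longer timeout; the name and environment are illustrative, and the timeout units are assumed to be minutes.

```powershell
# Hypothetical terminal definition with a 30-minute idle timeout.
New-PSUTerminal -Name 'Admin Terminal' -Environment 'PowerShell 7' -IdleTimeout 30
```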
Terminal history can be enabled per terminal configuration.
When terminal history is enabled, you will be able to view the history of all commands that were executed within the terminal. Click the View Command History button for the instance in question.
You will be able to review the command that ran, when it ran, who started the terminal, and what the output of the command was.
Apps are the root component for your web page.
The first step is to create an app in PowerShell Universal. This is the container for all your pages and components for your app website. We recommend running apps in external environments, like PowerShell 7, to ensure they are isolated from the rest of the server. To create an app, click Apps \ Apps and then Create New App.
You will need to provide a unique name and URL for the app when creating it. All other properties are optional. After creating the app, you can edit the app code. For example, try adding a component to your app.
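For example, a minimal app with a single button component might look like this:

```powershell
New-UDApp -Title 'Hello, World!' -Content {
    New-UDButton -Text 'Click Me' -OnClick {
        Show-UDToast -Message 'Hello from PowerShell Universal!'
    }
}
```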
Once you've saved your app, you can click the globe icon to view the result. Clicking the button will display a toast message in your browser.
The top-level cmdlet for apps is New-UDApp. You need to call it when returning an app. You can use it with or without pages.
The content of the app is a series of components to display on the page. It's a script block that will return all the components in the order they will be rendered on the page. You can use the Grid component to layout items and display things like text with typography.
You can customize the header of the app using several parameters.
To change the navigation layout, use the -Navigation and -NavigationLayout parameters.
Components are the individual widgets that you can place on your app. There are components for displaying data, taking user input, adding text and images, and more. Components can be downloaded as PowerShell modules and added to your app.
Components are called using the standard verb-noun syntax of any PowerShell cmdlet.
Learn more about components here.
You can specify multiple pages within an app. Each page defines a route. As of v3, all pages are dynamic. PowerShell will execute on each page load to render the new page. Since UD is a single-page application, the web browser does not need to refresh the entire web page when navigating between the different app pages.
Learn more about Pages here.
Apps will automatically have access to any commands available within the PSModulePath as well as modules you load directly into the app itself. That said, you can also define functions within the app itself. These functions will be included with a module that is stored alongside your app code. Any functions defined within this file will be automatically included with your app.
Within the PowerShell Universal admin console, define functions, variables and aliases in the Module tab. Any functions defined there will be written to a PSM1 file in the same directory as the application code.
Built-in variables can be found on the variables page.
When building an app, you will likely run into issues with cmdlet calls or syntax. Apps will auto reload as you make changes to the app files. If an app fails to start, you can navigate to the admin page, click Apps and click the Info button next to your app.
The Log tab shows all the logging coming from the PowerShell execution within your app. This should allow you to easily see errors and warnings coming from your app.
You can use Write-Debug to add additional log messages to your app. To enable debug logging, you will have to set the $DebugPreference variable at the top of your app script.
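A sketch of debug logging in an app; the button and message are illustrative.

```powershell
# Enable debug output for this app script.
$DebugPreference = 'Continue'

New-UDApp -Content {
    New-UDButton -Text 'Test' -OnClick {
        Write-Debug 'Button was clicked'   # shows up in the app log
    }
}
```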
You can customize the app menu by using the -Menu parameter.
Similar to jobs, apps can run in separate PowerShell processes. You can start and stop an app process by clicking the Start or Stop button from the Apps page.
Persistent runspaces allow you to maintain runspace state within your app event handlers. This is important for users that perform some sort of initialization within their endpoints that they do not want to execute on subsequent calls.
By default, runspaces will be reset after each execution. This discards variables, modules, and functions defined during the execution of an endpoint.
To enable persistent runspaces, you will need to configure an environment for your API. Set the -PersistentRunspace parameter to enable this feature. This is configured in the environments.ps1 script.
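A sketch of such an environment definition; the name is illustrative.

```powershell
# environments.ps1 - environment with persistent runspaces enabled.
New-PSUEnvironment -Name 'Persistent7' -Path 'pwsh.exe' -PersistentRunspace
```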
You will need to ensure that the environment is used by the app.
You can automatically grant app tokens to users that visit apps. This is useful if you want to invoke the management API for PowerShell Universal from within an app. Your app will need to have authentication enabled, and you will have to use the -GrantAppToken switch parameter on New-PSUDashboard.
From within your app, you can now invoke the management API without having to worry about app token management. The API will be invoked in the context of the user that is visiting the app.
By default, apps will display a toast message when an error is generated within an endpoint script. To avoid this behavior, you can use the -DisableErrorToast parameter of New-UDApp.
When starting an app, information about the variables and modules is displayed within the app log. If you wish to suppress this information, you can use the -DisableStartupLogging parameter.
Create web applications in PowerShell using PowerShell Universal Apps.
Disparate Data Grouped into Tables and Data Grids
End User Onboarding Tools
UI Tools for System Management without Elevated Credentials
Apps should be considered an advanced feature and require both knowledge of the PowerShell Universal app cmdlets as well as a decent knowledge of PowerShell itself. With that said, they provide the greatest level of customization for web apps in PowerShell.
Icon component for Universal Apps
We include FontAwesome v6 with PowerShell Universal. You can use Find-UDIcon to search through the list of included icons.
The UniversalDashboard.FontAwesomeIcons enum should not be used and is only included for backwards compatibility. Many of the icons are no longer part of FontAwesome 6.
Create icons by specifying their names. You can use the icon reference below to find icons.
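For example, a sketch of an icon by name with a size; the name casing can be confirmed with Find-UDIcon.

```powershell
New-UDIcon -Icon 'user' -Size 'lg'
```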
Set the size of the icon. Valid values are: xs, sm, lg, 2x, 3x, 4x, 5x, 6x, 7x, 8x, 9x, and 10x.
Rotate icons. The value represents the degrees of rotation.
Add a border to your icon.
Apply CSS styles to your icon.
Apps are individual websites created with PowerShell Universal. They take advantage of a large set of pre-defined cmdlets. You can use apps to create dynamic websites that meet any need you can think of. Some common apps are:
Icon names are slightly different than those shown on the FontAwesome website. For example, if you want to use the network-wired icon, you would use the following string.
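Assuming the usual conversion of hyphenated FontAwesome names, the string might look like the following; confirm the exact casing with Find-UDIcon.

```powershell
New-UDIcon -Icon 'NetworkWired'
```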
Date and time component for Universal Apps.
The New-UDDateTime component is used for formatting dates and times within the client's browser. By using the client's browser, you can format the time based on the local time zone and locale settings of the user.
The output of New-UDDateTime cannot be used with components like New-UDHtml, New-UDMarkdown or Show-UDToast. The object returned by New-UDDateTime needs to run JavaScript within the browser and is not an actual DateTime object.
The date and time component uses DayJS. For a full list of custom formatting options, visit the DayJS documentation.
By default, the date and time will be formatted using the LLL localized formatting template.
Resulting output: August 16, 2018 8:02 PM
You can specify custom formatting strings using the DayJS formatting template.
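A sketch of a custom format using DayJS tokens; the input date is illustrative.

```powershell
New-UDDateTime -InputObject (Get-Date '2019-01-25') -Format 'DD/MM/YYYY'
```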
Resulting output: 25/01/2019
You can specify the locale to display the date and time in.
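A sketch using the -Locale parameter; the input date is illustrative.

```powershell
New-UDDateTime -InputObject (Get-Date '2022-09-13 7:30') -Locale 'es'
```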
Resulting output: 13 de septiembre de 2022 7:30
Tooltip component for PowerShell Universal Apps.
Tooltips display informative text when users hover over an element.
Place the tooltip on top, bottom, left or right.
Tooltip content can contain any UD element.
Tooltips come in various types, including: dark, success, warning, error, info, and light.
Data grid component for Universal Apps.
The UDDataGrid component is an advanced version of the table that is useful for displaying large amounts of data. It supports many of the same features as the table, but also provides complex filtering, row virtualization, multi-column sort and more.
Data grids load their data via the -LoadRows event handler. You will need to return a hashtable that contains the row data and the total number of rows.
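A minimal sketch; the rows/rowCount keys follow the hashtable shape described above, and the data is illustrative.

```powershell
New-UDDataGrid -LoadRows {
    $data = @(
        @{ id = 1; name = 'Adam';  number = 1 }
        @{ id = 2; name = 'Sarah'; number = 2 }
    )
    # Return both the row data and the total row count.
    @{ rows = $data; rowCount = $data.Count }
} -Columns @(
    @{ field = 'name' }
    @{ field = 'number' }
)
```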
Columns are defined using hashtables.
Columns are customizable using New-UDDataGridColumn. More information on this cmdlet can be found here.
You can render custom components in columns by specifying render within the column hashtable. You can access the current row's data by using the $EventData or $Row variable.
In this example, the number is shown in the name column with a New-UDTypography component.
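A sketch of such a column hashtable; field names are illustrative.

```powershell
@{
    field  = 'name'
    render = {
        # $EventData contains the current row; display its number here.
        New-UDTypography -Text $EventData.number
    }
}
```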
Column fluidity or responsiveness can be achieved by setting the flex property of a column.
The flex property accepts a value between 0 and ∞. It works by dividing the remaining space in the grid among all flex columns in proportion to their flex value.
For example, consider a grid with a total width of 500px that has three columns: the first with width: 200; the second with flex: 1; and the third with flex: 0.5. The first column will be 200px wide, leaving 300px remaining. The column with flex: 1 is twice the size of flex: 0.5, which means that the final sizes will be 200px, 200px, and 100px.
To set a minimum and maximum width for a flex column, set the minWidth and maxWidth properties on the column.
The -LoadRows parameter is used to return data for the data grid. Table state is provided to the event handler as $EventData. You will find the following properties within the $EventData object.
Filter (Hashtable) - A filter object that you can use to construct filters against your data.
Page (Integer) - The current page. Starts at 0.
PageSize (Integer) - The number of records in a page.
Sort (Hashtable) - The sort options for the table.
To implement paging, you can access the Page and PageSize properties of the $EventData variable. If you are using a remote data source, you will want to implement custom paging logic. Below is an example of using the paging properties to page through the rows locally. Depending on your data source (e.g. SQL), you may page through data differently.
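A sketch of local paging using Select-Object; the data is illustrative.

```powershell
New-UDDataGrid -LoadRows {
    $allRows = 1..1000 | ForEach-Object { @{ id = $_; value = $_ } }
    # Skip whole pages, then take one page worth of rows.
    $page = $allRows |
        Select-Object -Skip ($EventData.Page * $EventData.PageSize) -First $EventData.PageSize
    @{ rows = [array]$page; rowCount = $allRows.Count }
} -Columns @(@{ field = 'value' })
```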
Out-UDDataGridData automatically implements paging, so you do not need to do the above if you have all your data in memory. The above is shown for demonstration purposes.
The data grid supports filtering data based on each column. Multiple filters can be defined to allow the user to narrow down the displayed data set. By default, the Out-UDDataGridData cmdlet implements filtering for local data. When using a remote data source, like SQL, it is suggested to implement custom filtering to improve user experience and performance.
The filter object is included in the $EventData for the -LoadRows event handler when a filter is defined. The object has the following structure.
The items property contains a collection of fields, operators and values. You can use these to filter your data.
Field (String) - The name of the field to filter.
Operator (String) - The type of operator to use when filtering the data.
Value (Object) - The value used to filter.
The logic operator field is used to specify the link between the filters. This can be and or or.
Contains a collection of quick filter values that you can choose how to apply to your data.
Contains the logic operator for the quick filter values specified by the user. This can be and or or.
The Out-UDDataGridData cmdlet provides an implementation of filtering for static data. If you use this cmdlet, you do not need to implement filtering manually. If you have a remote data source, you will want to provide a custom implementation for filtering. Below is an example of using the filter structure in $EventData to eliminate rows based on the filters provided by the user.
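A sketch of manual filtering; only two operators are handled, and the operator names (contains, equals) are assumptions based on common data-grid conventions.

```powershell
New-UDDataGrid -LoadRows {
    $rows = @(
        @{ id = 1; name = 'Adam' }
        @{ id = 2; name = 'Sarah' }
    )
    # Apply each filter item in turn; unhandled operators leave rows untouched.
    foreach ($item in $EventData.Filter.Items) {
        $rows = switch ($item.Operator) {
            'contains' { $rows | Where-Object { $_[$item.Field] -like "*$($item.Value)*" } }
            'equals'   { $rows | Where-Object { $_[$item.Field] -eq $item.Value } }
            default    { $rows }
        }
    }
    @{ rows = [array]$rows; rowCount = @($rows).Count }
} -Columns @(@{ field = 'name' })
```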
The quick filter is similar to a simple search box. You can enable quick filtering with the -ShowQuickFilter parameter on New-UDDataGrid. A search box will appear in the top right of the data grid. When the user enters a value, the quick filter information will be provided to the event handler.
Below is an example of how to use quick filters. Out-UDDataGridData implements quick filtering and is not required when using local data. The below is done for demonstration only.
The $EventData object will contain a Sort property when the user sorts the data grid. It contains properties for each column that is sorted. The property names start at 0 and increment as more columns are sorted.
For example, you can access the first sorted column as follows.
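A sketch of reading the first sorted column; the string-index access pattern is an assumption based on the numbering described above.

```powershell
# The first sorted column (index 0) and its direction.
$field     = $EventData.Sort.'0'.Field
$direction = $EventData.Sort.'0'.Sort   # 'asc' or 'desc'
```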
You will also receive the sort direction for each column.
Field
The field to sort.
String
Sort
The direction to sort the field.
asc, desc
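A sketch of honoring that sort state inside a -LoadRows handler. The indexed key ('0') and the Field/Sort property names follow the structure described above and are assumptions about exact casing.

```powershell
New-UDDataGrid -LoadRows {
    $Data = @(
        [PSCustomObject]@{ id = 1; name = 'beta' }
        [PSCustomObject]@{ id = 2; name = 'alpha' }
    )

    # The first sorted column is exposed under the incrementing key '0'.
    $sort = $EventData.Sort.'0'
    if ($sort) {
        $Data = $Data | Sort-Object -Property $sort.Field -Descending:($sort.Sort -eq 'desc')
    }

    @{ rows = [Array]$Data; rowCount = $Data.Count }
} -Columns @(New-UDDataGridColumn -Field name)
```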
You can use the -LoadDetailContent
event handler to display additional information about the row you are expanding. Information about the current row is available in $EventData.row
.
You can use the -LoadDetailContent parameter to look up nested data about an object. In this example, we load a data grid of virtual machines and display the name, operating system, memory and CPU cores. Expanding the detail content provides a data grid of the network cards available on the virtual machine. We are using dummy data in this example but you could use any cmdlet available to PowerShell Universal.
Tables provide editor support by specifying the -OnEdit
event handler. The new row data will be provided as $EventData
. You can choose to return updated row information (for example, adjusting something the user has entered) from the event handler. If you do not return anything, the row will reflect what the user entered.
The $EventData
has the following format.
Ensure that you provide the editable
property to each column you want the user to edit.
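A sketch of an editable column with an -OnEdit handler. The -Editable switch on New-UDDataGridColumn and the $EventData property names are assumptions; returning nothing from the handler keeps the user's value, as described above.

```powershell
New-UDDataGrid -LoadRows {
    @{ rows = @([PSCustomObject]@{ id = 1; name = 'server01' }); rowCount = 1 }
} -Columns @(
    New-UDDataGridColumn -Field name -Editable   # column must be marked editable
) -OnEdit {
    # $EventData contains the new row data after the edit.
    Show-UDToast -Message "New name: $($EventData.name)"
}
```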
You can enable row selection using the -CheckboxSelection
parameter to display checkboxes for the rows to select. Row selection requires a deterministic ID for the data rows provided. In the below example, you will see each row has a specific ID specified.
You can access selected data with the -OnSelectionChange
event handler or by retrieving the row IDs via Get-UDElement
.
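A sketch of checkbox selection with both access patterns. Note the deterministic id values on each row; the name of the element property that holds the selected row IDs is an assumption.

```powershell
New-UDDataGrid -Id 'grid' -CheckboxSelection -LoadRows {
    @{
        rows = @(
            [PSCustomObject]@{ id = 1; name = 'one' }   # deterministic IDs are required
            [PSCustomObject]@{ id = 2; name = 'two' }
        )
        rowCount = 2
    }
} -Columns @(New-UDDataGridColumn -Field name) -OnSelectionChange {
    Show-UDToast -Message 'Selection changed'
}

New-UDButton -Text 'Show selection' -OnClick {
    # The 'selection' property name on the element is an assumption.
    $element = Get-UDElement -Id 'grid'
    Show-UDToast -Message ($element.selection -join ', ')
}
```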
To override the default export functionality, use the -OnExport
event handler. $EventData
will be the same context object used for -LoadRows
. You should use Out-UDDataGridExport
to return the data from -OnExport
.
When using a custom export, you can use the -ExportOptions
parameter to define multiple export types. When the user selects the export type, you can check the Type property of $EventData
to determine which type of export to produce.
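A sketch of a custom export handler. The value format of -ExportOptions and the parameters of Out-UDDataGridExport are assumptions; check the cmdlet help for the exact signatures.

```powershell
New-UDDataGrid -LoadRows {
    @{ rows = @([PSCustomObject]@{ id = 1; name = 'one' }); rowCount = 1 }
} -Columns @(New-UDDataGridColumn -Field name) -ExportOptions @('csv', 'json') -OnExport {
    # $EventData.Type reflects the export type the user selected.
    $Data = @([PSCustomObject]@{ id = 1; name = 'one' })
    Out-UDDataGridExport -Data $Data
}
```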
In this example, we generate an array of 10,000 records. We will create a new function, Out-UDDataGridData
to manage the paging, sorting and filtering. This function is already included in the Universal module.
In this example, we'll query the PowerShell Universal database with dbatools.
Tree view component for Universal Apps.
New-UDTreeView
allows you to create a tree of items and, optionally, dynamically expand the list when clicked.
Create a basic tree view by using the New-UDTreeNode
cmdlet.
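A minimal sketch of a static tree. The -Node and -Children parameter names are assumptions about the cmdlet signatures.

```powershell
New-UDTreeView -Node {
    New-UDTreeNode -Name 'Servers' -Children {
        New-UDTreeNode -Name 'server01'
        New-UDTreeNode -Name 'server02'
    }
}
```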
Dynamic tree views allow you to run PowerShell whenever a node is clicked. You can then return a list of nodes that should be rendered underneath the clicked node. You can also take other actions such as opening a modal or showing a toast.
List component for Universal Apps.
Lists are continuous, vertical indexes of text or images.
Lists are a continuous group of text or images. They are composed of items containing primary and supplemental actions, which are represented by icons and text.
You can define an action to take when an item is clicked by using the -OnClick
parameter of New-UDListItem
.
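A minimal sketch of a clickable list item; the -Content and -Label parameter names are assumptions.

```powershell
New-UDList -Content {
    New-UDListItem -Label 'Notifications' -OnClick {
        Show-UDToast -Message 'Notifications clicked'
    }
}
```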
Typography component for Universal Apps
Use typography to present your design and content as clearly and efficiently as possible.
Too many type sizes and styles at once can spoil any layout. A typographic scale has a limited set of type sizes that work well together along with the layout grid.
You can use the -Style
parameter to define colors for your text.
Backdrop component for Universal Apps.
The backdrop component places an overlay over the top of the entire page. It's useful for displaying loading states.
To create a basic backdrop, you can use the New-UDBackdrop
cmdlet and include content to show within the backdrop. The content will be centered on the page. To show the backdrop, use the -Open
switch parameter.
The backdrop provides an -OnClick
handler that you can use to close the backdrop when clicked. You can use Set-UDElement
to open and close the backdrop.
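A sketch of opening and closing a backdrop with Set-UDElement. The 'open' property name is an assumption about the component's state.

```powershell
New-UDBackdrop -Id 'backdrop' -Content {
    New-UDTypography -Text 'Loading...'
} -OnClick {
    # Close the backdrop when the overlay is clicked.
    Set-UDElement -Id 'backdrop' -Properties @{ open = $false }
}

New-UDButton -Text 'Show backdrop' -OnClick {
    Set-UDElement -Id 'backdrop' -Properties @{ open = $true }
}
```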
Map component for Universal Apps.
The UDMap component is a robust control that provides a huge set of features. You can select base layers, configure togglable layers, set markers, define vectors and interact with other Universal App components.
This basic map defines a simple base layer using the wmflabs.org tile server. You can use your own custom tile server by specifying a URL. The map is positioned over Hailey, Idaho.
You can enable the layer control by using the New-UDMapLayerControl
cmdlet. This map defines several layers with components that you can toggle on and off. You can only have one base layer selected at a time. Map overlay layers can be toggled on and off.
Markers are used to highlight particular locations.
You can specify custom icons for markers using the -Icon
parameter.
You can create a popup when clicking the marker by using the -Popup
parameter and the New-UDMapPopup
cmdlet.
Maps provide a series of interactive capabilities for adding components to and manipulating the map.
You can use styling by using the -Sx
parameter of New-UDTypography
. For example, to apply the secondary text color, you can use the following syntax.
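For instance, the MUI theme token text.secondary can be applied via -Sx (a sketch; the exact token names depend on the active theme):

```powershell
New-UDTypography -Text 'Secondary text' -Sx @{ color = 'text.secondary' }
```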
Display an image based on a URL. You can host image files using published folders.
Charting components for Universal Apps.
Universal Apps provides several built-in charting solutions to help visualize your data retrieved from PowerShell.
Universal Apps integrates with ChartJS.
To create a chart, use New-UDChartJS
and New-UDChartJSData
. The below chart shows the top ten CPU using processes.
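A sketch of that chart, using the -DataProperty and -LabelProperty parameters described below; ProcessName is assumed to be the label field.

```powershell
# Top ten processes by CPU, rendered as a bar chart.
$Data = Get-Process | Sort-Object -Property CPU -Descending | Select-Object -First 10
New-UDChartJS -Type 'bar' -Data $Data -DataProperty 'CPU' -LabelProperty 'ProcessName'
```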
A bubble chart consists of x and y coordinates and an r value for the radius of the circles.
Colors can be defined using the various color parameters of New-UDChartJS
.
By default, you do not need to define data sets manually. A single data set is created automatically when you use the -DataProperty
and -LabelProperty
parameters. If you want to define multiple data sets for a single chart, you can use the -Dataset
property in conjunction with New-UDChartJSDataset
.
You can take action when a user clicks the chart. This example shows a toast with the contents of the $Body
variable. The $Body
variable contains a JSON string with information about the elements that were clicked.
You can use New-UDDynamic
to create charts that refresh on an interval.
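A sketch of a chart wrapped in New-UDDynamic with auto-refresh, assuming the -AutoRefresh and -AutoRefreshInterval parameters (interval in seconds).

```powershell
New-UDDynamic -Content {
    # The data is re-queried on every refresh of the dynamic region.
    $Data = Get-Process | Sort-Object -Property CPU -Descending | Select-Object -First 5
    New-UDChartJS -Type 'bar' -Data $Data -DataProperty 'CPU' -LabelProperty 'ProcessName'
} -AutoRefresh -AutoRefreshInterval 10
```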
Monitors are a special kind of chart that tracks data over time. Monitors are good for displaying data such as server performance stats that change frequently. You return a single value from a monitor and it is graphed automatically over time.
The New-UDChartJS
cmdlet supports accepting advanced ChartJS options. You can use the -Options
parameter to pass in a hashtable.
This example hides the legend.
You can include a title with the title option.
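A sketch combining both: the hashtable shape follows the Chart.js v3 options format (plugins.legend, plugins.title), which is an assumption about the bundled Chart.js version.

```powershell
$Options = @{
    plugins = @{
        legend = @{ display = $false }                          # hide the legend
        title  = @{ display = $true; text = 'CPU by process' }  # chart title
    }
}
$Data = Get-Process | Sort-Object -Property CPU -Descending | Select-Object -First 5
New-UDChartJS -Type 'bar' -Data $Data -DataProperty 'CPU' -LabelProperty 'ProcessName' -Options $Options
```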
Universal Dashboard integrates with Nivo Charts. Below you will find examples and documentation for using these charts.
All the Nivo charts can be created with New-UDNivoChart
. You will specify a switch parameter for the different types of charts. Each chart type will take a well defined data format via the -Data
parameter.
Nivo provides the ability to specify patterns to display over data sets. You can configure these patterns with New-UDNivoPattern
and New-UDNivoFill
.
Nivo charts provide responsive widths so they will resize automatically when placed on a page or the browser is resized. A height is required when using responsive widths.
Like many components in Universal Dashboard v3, Nivo charts do not define auto-refresh properties themselves. Instead, you can take advantage of New-UDDynamic
to refresh the chart on an interval.
Nivo charts support OnClick event handlers. You will be provided with information about the data set that was clicked as JSON.
Assuming the below JSON data, you can use the following app code.
You can use the following format to use colors based on your data.
Button component for Universal Apps
Buttons allow users to take actions, and make choices, with a single tap.
Contained buttons are high-emphasis, distinguished by their use of elevation and fill. They contain actions that are primary to your app.
Outlined buttons are medium-emphasis buttons. They contain actions that are important, but aren’t the primary action in an app.
You can control the size of a button in pixels by using the -Style parameter.
Sometimes you might want to have icons for certain buttons to enhance the UX of the application, as we recognize logos more easily than plain text. For example, if you have a delete button, you can label it with a dustbin icon.
You can specify a script block to execute when the button is clicked
Loading buttons will display a loading icon while an event handler is running. This is useful for longer running events.
A button group produces a button with a drop-down menu. This is also referred to as a split button.
This example uses Set-UDElement
to disable the button after performing an action.
Autocomplete component for Universal Apps
The autocomplete is a normal text input enhanced by a panel of suggested options.
Creates a basic autocomplete with a static list of options
When text is typed, it can be filtered with OnLoadOptions
. $Body
will contain the current text that is typed.
This example filters the array with Where-Object
.
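A sketch of that pattern. Returning the filtered options as JSON from OnLoadOptions is an assumption about the expected return format.

```powershell
New-UDAutocomplete -Id 'auto' -OnLoadOptions {
    $Options = @('Apple', 'Banana', 'Cherry')
    # $Body contains the text typed so far; return the matching options.
    $Options | Where-Object { $_ -like "*$Body*" } | ConvertTo-Json
}
```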
$Body
contains the currently selected item. The OnChange event will fire when the user selects one or more items.
You can place an icon before an autocomplete by using the -Icon
parameter.
OnEnter is triggered when the user presses the enter key within the autocomplete.
You can use New-UDAutoCompleteOption
to specify name and values.
Progress component for Universal Apps
Check component for Universal Apps
Checkboxes allow the user to select one or more items from a set.
Checkboxes can be disabled and checked by default
Create checkboxes that use any icon and style.
Create checkboxes that fire script blocks when changed.
You can adjust where the label for the checkbox is placed.
You can use Get-UDElement
to get the value of the checkbox. Get-UDElement
will also return other properties of the checkbox component.
The following example shows a toast message with the value of the checkbox.
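A sketch of that example; the 'checked' property name on the element returned by Get-UDElement is an assumption.

```powershell
New-UDCheckbox -Id 'check' -Label 'Enable feature'
New-UDButton -Text 'Show value' -OnClick {
    $element = Get-UDElement -Id 'check'
    Show-UDToast -Message "Checked: $($element.checked)"
}
```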
A skeleton component for PowerShell Universal Apps.
A skeleton is a form of a loading component that can show a placeholder while data is received.
There are three variants that you can use for a skeleton. You can use a circle, text or a rectangle. You can also define the height and width of the skeleton.
Skeletons will use the pulsate animation by default. You can also disable animation or use a wave animation.
Code editor component for Universal Apps.
You can create a new code editor with the New-UDCodeEditor
cmdlet. Specifying the -Language
parameter will enable syntax highlighting for that language. You will need to specify a height in pixels.
Use the -Code
parameter to specify code that will be populated within the code editor when it loads.
You can retrieve code from another component using the Get-UDElement
cmdlet and accessing the code property of the hashtable that is returned.
You can set code from another component using the Set-UDElement
cmdlet. Specify the code value in a hashtable passed to the -Properties
parameter.
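A sketch of both operations against a code editor, using the code property described above.

```powershell
New-UDCodeEditor -Id 'editor' -Language 'powershell' -Height '200px'

New-UDButton -Text 'Read code' -OnClick {
    # Read the 'code' property from the editor element.
    $code = (Get-UDElement -Id 'editor').code
    Show-UDToast -Message $code
}

New-UDButton -Text 'Set code' -OnClick {
    Set-UDElement -Id 'editor' -Properties @{ code = 'Get-Process' }
}
```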
The Monaco editor supports a wide range of options. If you wish to use options that aren't available on the New-UDCodeEditor
cmdlet, you can use the -Options
parameter and pass a hashtable of options instead.
Endpoint configuration for Universal APIs.
Endpoints are defined by their URI and HTTP method. Calls made to the Universal server that match your defined API endpoint and method execute the API endpoint script.
To invoke the above method, you can use Invoke-RestMethod
.
When defining endpoints in the management API, you can skip the New-PSUEndpoint
call, as the admin console defines it.
The only contents that you need to provide in the editor are the script you wish to call.
Endpoints can have one or more HTTP methods defined. To determine which method is used by an endpoint, use the built-in $Method
variable.
URLs can contain variable segments. You can denote a variable segment using a colon (:
). For example, the following URL would provide a variable for the ID of the user. The $Id
variable will be defined within the endpoint when it is executed. Variables must be unique in the same endpoint URL.
To call this API and specify the ID, do the following:
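A sketch of the endpoint and the matching call; the endpoint body and the default port are assumptions.

```powershell
# Endpoint definition: the :id segment becomes the $Id variable inside the endpoint.
New-PSUEndpoint -Url '/user/:id' -Method 'GET' -Endpoint {
    @{ id = $Id }
}

# Calling the endpoint with an ID of 123:
Invoke-RestMethod -Uri 'http://localhost:5000/user/123'
```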
Query string parameters are automatically passed into endpoints as variables that you can then access. For example, if you have an endpoint that expects an $Id
variable, you can provide it in the query string.
The resulting Invoke-RestMethod
call must then include the query string parameter.
When using multiple query string parameters, ensure that your URL is surrounded by quotes so PowerShell translates it properly. Including an ampersand (&) without quotes will cause issues in both Windows PowerShell and PowerShell 7.
Below is an example of CWE-914. Include a $IsChallengePassed
query string parameter to bypass the challenge.
In order to avoid this particular issue, you can use a param
block.
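A sketch of the mitigation: with a param block, only the declared $Answer parameter is bound from the query string, so a crafted IsChallengePassed query string parameter can no longer overwrite the script's variable. The endpoint logic itself is hypothetical.

```powershell
New-PSUEndpoint -Url '/challenge' -Method 'GET' -Endpoint {
    param($Answer)   # only declared parameters are bound from the query string

    $IsChallengePassed = $false
    if ($Answer -eq '42') { $IsChallengePassed = $true }
    $IsChallengePassed
}
```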
Request headers are available in APIs using the $Headers
variable. The variable is a hashtable. To access a header, use the following syntax:
Request cookies are available in APIs using the $Cookies
variable. The variable is a hashtable. To access a cookie, use the following syntax:
Send back request cookies with the New-PSUApiResponse
cmdlet. Use the -Cookies
parameter with a supplied hashtable.
To access a request body, you will simply access the $Body
variable. The $Body
variable will be a string. If you expect JSON, you should use ConvertFrom-Json
.
To call the above endpoint, specify the body of Invoke-RestMethod
.
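A sketch of the round trip: the endpoint converts the string body from JSON, and the client sends a JSON body. The URL and field names are hypothetical.

```powershell
# Endpoint: $Body arrives as a string, so convert it from JSON first.
New-PSUEndpoint -Url '/user' -Method 'POST' -Endpoint {
    $user = $Body | ConvertFrom-Json
    "Hello, $($user.name)"
}

# Client call with a JSON body:
Invoke-RestMethod -Uri 'http://localhost:5000/user' -Method 'POST' `
    -Body (@{ name = 'Adam' } | ConvertTo-Json) -ContentType 'application/json'
```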
You can view the live log information for any endpoint by clicking the log tab. Live logs include URL, HTTP method, source IP address, PowerShell streams, status code, return Content Type, and HTTP content length.
You can write to the live log from within your endpoints with cmdlets like Write-Host
.
You can use the Test tab in the Endpoint editor to test your APIs. Using this Test tool, you can adjust headers, the query string, and body. You can also adjust the Authentication and Authorization for the test.
When using the test tab, any changes to the values of the test will result in an updated Code block that you can then use within PowerShell. Click the Code tab to view the test code.
Additionally, tests performed within the tester will be stored for 30 days to allow for retesting without having to reconfigure all the properties. Clicking the Apply button will set up the Test tool with the same properties.
You can pass data to an endpoint as form data. Form data will pass into your endpoint as parameters.
You can then use a hashtable with Invoke-RestMethod to pass form data.
You can pass JSON data to an endpoint and it will automatically bind to a param block.
You can then send JSON data to the endpoint.
You can use a param
block within your script to enforce mandatory parameters and provide default values for optional parameters such as query string parameters. Variables such as $Body
, $Headers
and $User
are provided automatically.
In the below example, the $Name
parameter is mandatory and the $Role
parameter has a default value of Default.
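A sketch of that param block inside an endpoint; the URL is hypothetical.

```powershell
New-PSUEndpoint -Url '/account' -Method 'GET' -Endpoint {
    param(
        [Parameter(Mandatory)]
        $Name,             # request fails validation if Name is not supplied
        $Role = 'Default'  # optional, with a default value
    )
    @{ name = $Name; role = $Role }
}
```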
When using the param
block with route parameters like the above example, you must include the route variable in your parameter. If it is not specified, you will not have access to that value.
For example, the following $Name
variable is always $null
. The endpoint always returns false.
Data returned from endpoints is assumed to be JSON data. If you return an object from the endpoint script block, it is automatically serialized to JSON. If you want to return another type of data, you can return a string formatted however you choose.
You can process uploaded files by using the $Data
parameter to access the byte array of data uploaded to the endpoint.
The multipart/form-data
content type is not supported for uploading files to APIs.
You can also save the file into a directory.
You can send files down using the New-PSUApiResponse
cmdlet.
You can return custom responses from endpoints by using the New-PSUApiResponse
cmdlet in your endpoint. This cmdlet allows you to set the status code, content type and even specify the byte[] data for the content to be returned.
You can also return custom body data with the -Body
parameter of New-PSUApiResponse
.
Invoking the REST method returns the custom error code.
You can control the content type of the returned data with the -ContentType
parameter.
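For instance, a sketch returning an XML payload with a custom content type (the URL and body are hypothetical):

```powershell
New-PSUEndpoint -Url '/custom' -Method 'GET' -Endpoint {
    New-PSUApiResponse -StatusCode 200 -Body '<status>ok</status>' -ContentType 'text/xml'
}
```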
You can control the response headers with a hashtable of values that you pass to the -Headers
parameter.
Persistent runspaces allow you to maintain runspace state between API calls. This is important for users that perform some sort of initialization within their endpoints that they do not want to execute on subsequent API calls.
By default, runspaces are reset after each execution. This removes variables, modules and functions defined during the execution of the API.
You can then assign the API environment in the settings.ps1
script.
By default, endpoints will not time out. To set a timeout for your endpoints, you can use the New-PSUEndpoint
-Timeout
parameter. The timeout is set in the number of seconds.
You can define the path to an external endpoint content file with the -Path
parameter of New-PSUEndpoint
. The path is relative to the .universal
directory in Repository.
The content of the endpoints.ps1
file is then this:
There is no UI for creating a C# API, so you need to do so using configuration files. First, create a .cs
file that runs your API.
You will have access to a request
parameter that includes all the data about the API request.
You will also have access to a ServiceProvider
property that allows you to access services within PowerShell Universal. These are not currently well-documented, but below is an example of restarting a dashboard.
Some other useful services include:
IDatabase
IApiService
IConfigurationService
IJobService
You can choose to return an ApiResponse
from your endpoint.
Once you have defined your C# endpoint file, you can add it by editing endpoints.ps1
.
The PowerShell Universal service automatically compiles and runs C# endpoints.
The code editor component allows you to host the editor within your dashboards.
For a full list of options, check the interface.
Avoid using endpoint URLs that match internal PowerShell Universal Management API URLs, as this causes unexpected behavior. You can reference the Management API documentation to verify that none of your URLs match.
When accepting input via query string parameters, you may be vulnerable to issues such as CWE-914 (improper control of dynamically-identified variables). Consider using a param
block to ensure that only valid parameters are provided to the endpoint.
To enable persistent runspaces, you will need to configure an environment for your API. Set the -PersistentRunspace
parameter to enable this feature. This is configured in the environments.ps1
script.
C# APIs are enabled as a .
Modal component for Universal Apps.
Modals inform users about a task and can contain critical information, require decisions, or involve multiple tasks.
Full width modals take up the full width as defined by the -MaxWidth
parameter.
Persistent modals do not close when you click off of them. You will have to close it with Hide-UDModal
.
You can use the Hide-UDModal
cmdlet to hide a modal that is currently shown.
You can style modals using the -Style
, -HeaderStyle
, -ContentStyle
and -FooterStyle
parameters. Style is applied to the entire modal itself, while the individual section styles are only applied to those sections. The values for these parameters are hashtables of CSS values.
Floating action button component for Universal Apps
A floating action button (FAB) performs the primary, or most common, action on a screen.
A floating action button appears in front of all screen content, typically as a circular shape with an icon in its center. FABs come in two types: regular, and extended.
Only use a FAB if it is the most suitable way to present a screen’s primary action.
Only one floating action button is recommended per screen to represent the most common action.
Date Picker component for Universal Apps
Date pickers provide a simple way to select a single value from a pre-determined set.
Date pickers can be used in Forms and Steppers.
The OnChange event handler is called when the date changes. You can access the current date by using the $Body
variable.
You can customize how the date picker is shown. The default is the inline
variant that displays the date picker popup in line with the input control. The static
variant displays the date picker without having to click anything.
To set the locale of the date picker, specify the -Locale
parameter.
By default, the user can select any date. To specify minimum and maximum dates, use the -Minimum
and -Maximum
parameters.
You can limit which portions of the date picker are included by using the -Views
parameter. For example, if you wanted to remove the year selector and limit to the current year, you could do the following.
A text editor component for Universal Apps.
The editor component is based on Editor.js. It's a block editor that accepts text, links, lists, code and images.
When working with the editor, you can receive data about the current document via the OnChange
parameter. By default, data is returned in the Editor.js JSON format.
To create a basic editor, use the New-UDEditor
cmdlet.
The editor will be available and you can add new blocks by clicking the plus button.
If you define a script block for the -OnChange
event handler, the $EventData
variable will contain the current state of the editor. By default, this returns the Editor.js JSON block format.
You can also use the HTML render plugin by specifying the -Format
parameter.
To specify the default data for the editor, use the -Data
parameter. You need to specify the JSON block format.
In order to support images, you will need to provide a published folder in which to upload the images. Once a published folder is defined, images can be uploaded directly in the editor. They will be placed within the directory and then served through the request path.
Id
string
The ID of this component.
Data
Hashtable
The Editor.JS data for this component
OnChange
ScriptBlock
The script block event handler to call when the editor data changes.
Format
string
Whether to return either json or html in the OnChange script block.
Form component for Universal Apps
Forms provide a way to collect data from users.
Forms can include any type of control you want. This allows you to customize the look and feel and use any input controls.
Data entered via the input controls will be sent back to the OnSubmit
script block when the form is submitted. Within the OnSubmit
event handler, you will have access to the $EventData
variable that will contain properties for each of the fields in the form.
For example, if you have two fields, you will have two properties on $EventData
.
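A sketch of a two-field form: each input's Id becomes a property on $EventData.

```powershell
New-UDForm -Content {
    New-UDTextbox -Id 'txtName' -Label 'Name'
    New-UDCheckbox -Id 'chkEnabled' -Label 'Enabled'
} -OnSubmit {
    # One property per input control, keyed by Id.
    Show-UDToast -Message "Name: $($EventData.txtName), Enabled: $($EventData.chkEnabled)"
}
```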
The following input controls automatically integrate with a form. The values that are set within these controls will be sent during validation and in the OnSubmit
event handler.
Simple forms can use inputs like text boxes and checkboxes.
Since forms can use any component, you can use standard formatting components within the form.
When a form is submitted, you can optionally return another component to replace the form on the page. You can return any Universal Dashboard component. All you need to do is ensure that the component is written to the pipeline within the OnSubmit
event handler.
Form validation can be accomplished by using the OnValidate script block parameter.
You can define an -OnCancel
event handler to invoke when the cancel button is pressed. This can be used to take actions like close a modal.
Although you can return components directly from a form, you may want to retain the form so users can input data again. To do so, you can use Set-UDElement
and a placeholder element that you can set the content to.
In this example, we have an empty form that, when submitted, will update the results
element with a UDCard.
Instead of defining all the layout and logic for forms using cmdlets, you can also define a form based on a hashtable of schema. This version of forms is based on react-jsonschema-form.
You define fields that accept string, number, integer, enum and boolean types. This changes the type of input shown.
You can use the required
property to set a list of required properties.
Note that the properties need to be lower case! For example, you need to ensure the keys in your properties hashtable are lower case and the list of required properties are also lower case.
You can use the schemaUI
property to modify the ordering of the fields.
You can create forms that accept 0 to many objects. The user will be able to add and remove objects to the form.
You can automatically generate forms based on scripts in your PowerShell Universal environment. Script forms will generate input components based on the param
block. Script forms automatically support progress and feedback.
Script forms also support displaying the output as text or a table.
Table component for Universal Apps
Tables display sets of data. They can be fully customized.
Tables display information in a way that’s easy to scan, so that users can look for patterns and insights. They can be embedded in primary content, such as cards.
A simple example with no frills. Table columns are defined from the data.
Define custom columns for your table.
Define column rendering. Sorting and exporting still work for the table.
Column width can be defined using the -Width
parameter. You can also decide to truncate columns that extend past that width.
You can configure custom filters per column. The table supports text
, select
, fuzzy
, slider
, range
, date
, number
, and autocomplete
filters.
When using server-side processing, the available filters may not display the full range of options since the select dropdown only has access to the current page of results. To avoid this, you can use the -Options
parameter on New-UDTableColumn
.
To enable search, use the -ShowSearch
parameter on New-UDTable
.
When using custom columns, you will need to add the -IncludeInSearch
parameter to the columns you'd like to include in the search.
Process data on the server so you can perform paging, filtering, sorting and searching in systems like SQL. To implement a server-side table, you will use the -LoadData
parameter. This parameter accepts a ScriptBlock
. The $EventData
variable includes information about the state of the table. You can use cmdlets to process the data based on this information.
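A sketch of a server-side table. Out-UDTableData is assumed to apply the table state from $EventData to the data; its exact parameters are assumptions based on the properties listed below.

```powershell
New-UDTable -Columns @(
    New-UDTableColumn -Property 'Name' -Title 'Name'
    New-UDTableColumn -Property 'Status' -Title 'Status'
) -LoadData {
    $Data = Get-Service
    # Page through the data using the state provided in $EventData.
    $Data | Out-UDTableData -Page $EventData.Page -TotalCount $Data.Count -Properties $EventData.Properties
}
```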
The $EventData
object contains the following properties.
Filters
Hashtable[], e.g. @{ id = 'fieldName'; value = 'filterValue' }
A list of filter values. Each hashtable has an Id
and a Value
property.
OrderBy
Hashtable @{ field = "fieldName" }
Property name to sort by.
OrderDirection
string
asc
or desc
depending on the sort order.
Page
int
The current page (starting with 0).
PageSize
int
The selected page size.
Properties
string[]
An array of properties being shown in the table.
Search
string
A search string provided by the user.
TotalCount
int
The total number of records before filtering or paging.
You may want to allow the user to take action on the current set of displayed data. To do so, use Get-UDElement
to retrieve the table by its Id. Once you have the element, you can use the Data
property of the element to get an array of currently displayed rows.
By default, paging is disabled and tables will grow based on how many rows of data you provide. You can enable paging by using the -ShowPagination
parameter (alias -Paging
). You can configure the page size using the -PageSize
parameter.
By default, the page size selector provides an option to show all rows. If you want to prevent users from doing this, use the -DisablePageSizeAll
parameter.
You can change the location of the pagination control by using the -PaginationLocation
parameter. It accepts top, bottom and both.
The page size, by default, is set to 5. Users can adjust the number of rows per page by using the Rows per page drop down. You can adjust the default page size by using the -PageSize
parameter. To adjust the values available within the Rows per page drop down, you can pass an array of integers to the -PageSizeOptions
parameter.
To enable sorting for a table, use the -ShowSort
parameter. When you enable sorting, you will be able to sort the table by clicking the column headers. By default, multi-sort is enabled. To sort by multiple columns, hold Shift and click a column header.
You can control which columns can be sorted by using New-UDTableColumn
and -ShowSort
parameter.
By default, the sorting of a table has 3 states. Unsorted, ascending and descending. If you would like to disable the unsorted state, use the -DisableSortRemove
parameter of New-UDTable
.
Tables support selection of rows. You can create an event handler for the OnRowSelected
parameter to be notified when a row is selected or unselected, or you can use Get-UDElement
to retrieve the current set of selected rows.
The following example creates a table with row selection enabled. A toast is shown when clicking the row or when clicking the GET Rows button.
The $EventData
variable for the -OnRowSelected
event will include all the columns as properties and a selected property as to whether the row was selected or unselected.
For example, the service table data would look like this.
When using selection and -LoadData
, the -OnRowSelected $EventData
will be the IDs of the rows and not the entire row data. It will still indicate whether the row has been selected or de-selected.
You can include additional information within the table by using the -OnRowExpand
parameter of New-UDTable
. It accepts a ScriptBlock that you can use to return additional components.
Tables support exporting the data within the table. You can export as CSV, XLSX, JSON or PDF. You can define which columns to include in an export and choose to export just the current page or all the data within the table.
Hidden columns allow you to include data that is not displayed in the table but is included in the exported data.
The following hides the StartType column from the user but includes it in the export.
You can control the export functionality with a PowerShell script block. This is useful when exporting from server-side sources like SQL server tables.
In this example, I have a SQL table that contains podcasts. When exporting, you will receive information about the current state of the table to allow you to customize what data is exported.
You can decide which export options to present to your users using the -ExportOption
parameter. The following example would only show the CSV export option.
You can use the -TextOption
parameter along with the New-UDTableTextOption
cmdlet to set text fields within the table.
You can externally refresh a table by putting the table within a dynamic region and using Sync-UDElement
.
This example creates a button to refresh the table.
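A sketch of that pattern: the table lives inside a dynamic region, and the button syncs it by Id.

```powershell
New-UDButton -Text 'Refresh table' -OnClick {
    Sync-UDElement -Id 'tableRegion'
}

New-UDDynamic -Id 'tableRegion' -Content {
    # Re-evaluated each time the region is synced.
    New-UDTable -Data (Get-Service | Select-Object Name, Status)
}
```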
If you use the -LoadData
parameter, you can sync the table directly. This has the benefit of maintaining the table state, such as the page and filtering, after the refresh.
You can use the -ShowRefresh
parameter to provide a refresh button for server-side tables.
You can use a theme to create a table with alternating row colors.
Use the -OnRowStyle
parameter to style the rows based on the row content. Return a hashtable with CSS styles for the row.
Select component for Universal Apps
Select components are used for collecting user provided information from a list of options.
Create a simple select with multiple options.
Create a select with groups of selections.
Execute a PowerShell event handler when the value of the select is changed. $EventData[0] contains the single item that was selected.
Execute a PowerShell event handler when more than one value of the select is changed. $EventData is an array of the selected items.
Retrieve the value of the select from another component.
Chip component for Universal Apps.
Chips are compact elements that represent an input, attribute, or action.
Chips allow users to enter information, make selections, filter content, or trigger actions.
While included here as a standalone component, the most common use will be in some form of input, so some of the behavior demonstrated here is not shown in context.
Shows a toast when the chip is clicked.
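A minimal sketch of that behavior:

```powershell
# Display a toast notification whenever the chip is clicked.
New-UDChip -Label 'Click me' -OnClick {
    Show-UDToast -Message 'Chip clicked!'
}
```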
Information about Universal App pages.
An app can consist of one or more pages. A page can have a particular name and URL. You can define a URL that accepts one or more variables in the URL to define a dynamic page.
Within the app editor, click the Create App Page button.
Once the page has been created, it will be listed in the pages tab.
To update the content of a page, click the Edit Code button.
As an example, you could add a button to your page.
Once you have added the controls you would like to the page, you can add it to your app. To reference the page in your app, use Get-UDPage
.
A basic page can be defined using the New-UDPage
cmdlet. You could navigate to this page by visiting the /app
URL of your dashboard.
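A basic sketch of such a page (the page name and text are illustrative):

```powershell
# A page named 'App', served at the /app URL by default.
$Page = New-UDPage -Name 'App' -Content {
    New-UDTypography -Text 'Hello, world!'
}

New-UDApp -Title 'My App' -Pages @($Page)
```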
Apps can have multiple pages, and those pages can be defined by passing an array of UDPages to New-UDApp.
You may want to organize your app into multiple PS1 files. You can do this using pages.
A page can have a custom URL by using the -Url
parameter. You could navigate to this page by visiting the /db
URL of your app.
You can define a page with variables in the URL to create pages that adapt based on that URL.
Query string parameters are passed to pages and other endpoints as a hashtable variable called $Query
.
For example, if you visited a page with the following query string parameter: http://localhost:5000/dashboard/Page1?test=123
You would have access to this value using the following syntax:
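For example, assuming the query string shown above, the value can be read from the `$Query` hashtable inside the page content:

```powershell
# Visiting http://localhost:5000/dashboard/Page1?test=123
New-UDPage -Name 'Page1' -Content {
    # $Query is a hashtable of query string parameters.
    New-UDTypography -Text "test = $($Query['test'])"
}
```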
You can prevent users from accessing pages based on their role by using the -Role
parameter of pages. You can configure roles and role policies on the Security page.
The following options are available for customizing the header.
Use the -HeaderPosition
parameter to adjust the behavior of the header.
absolute / fixed - Remains at the top of the page, even when scrolling.
relative - Remains at the top of the page. Not visible when scrolling.
You can adjust the colors of the header by specifying the -HeaderColor
and -HeaderBackgroundColor
parameters. These colors will override the theme colors.
You can customize the navigation of a page using the -Navigation
and -NavigationLayout
parameters. Navigation is defined using the List component. Navigation layouts are either permanent or temporary.
Custom navigation can be defined with a list. List items can include children to create drop down sections in the navigation.
Dynamic navigation can be used to execute scripts during page load to determine which navigation components to show based on variables like the user, IP address or roles.
You can generate dynamic navigation by using the -LoadNavigation
parameter. The value of the parameter should be a script block to execute when loading the navigation.
You can use dynamic navigation to create a navigation menu that takes advantage of roles. Use Protect-UDSection
to limit who has access to particular menu items. Ensure that you also include the same role on the page.
The permanent layout creates a static navigation drawer on the left hand side of the page. It cannot be hidden by the user.
The temporary layout creates a navigation drawer that can be opened using a hamburger menu found in the top left corner. This is the default setting.
You can use New-UDAppBar
with a blank page to create horizontal navigation.
You can display a logo in the navigation bar by using the -Logo
parameter.
First, setup a published folder to host your logo.
Now, when creating your page, you can specify the path to the logo.
The logo will display in the top left corner.
To customize the style of your logo, you can use a cascading style sheet and target the ud-logo
element ID.
You can define custom content to include in the header by using the -HeaderContent
parameter.
Page titles are static by default, but you can override this behavior by using -LoadTitle
. It will be called when the page is loaded. This is useful when defining pages in multilingual dashboards.
Static pages allow for better performance by not executing PowerShell to load the content of the page. This can be useful when displaying data that does not require dynamic PowerShell execution. The page content is constructed when the dashboard is started.
Static pages do not have access to user specific data. This includes variables such as:
$Headers
$User
$Roles
You can still include dynamic regions within pages. These dynamic regions will have access to user data. Reloading the below example will update the date and time listed in the page.
Slider component for Universal Apps.
Sliders allow users to make selections from a range of values.
Sliders reflect a range of values along a bar, from which users may select a single value. They are ideal for adjusting settings such as volume, brightness, or applying image filters.
Textbox component for Universal Apps
A textbox lets users enter and edit text.
A password textbox will mask the input.
You can create a multiline textbox by using the -Multiline
parameter. Pressing enter will add a new line. You can define the number of rows and the max number of rows using -Rows
and -RowsMax
.
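A sketch of a multiline textbox using those parameters:

```powershell
# A multiline textbox that starts at 3 rows and grows to at most 10.
New-UDTextbox -Id 'notes' -Label 'Notes' -Multiline -Rows 3 -RowsMax 10
```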
You can use Get-UDElement
to get the value of a textbox.
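A sketch of reading a textbox from a button handler, assuming the element's current text is exposed via its `value` property:

```powershell
New-UDTextbox -Id 'txtName' -Label 'Name'

New-UDButton -Text 'Greet' -OnClick {
    # Read the textbox's current value via Get-UDElement.
    $Value = (Get-UDElement -Id 'txtName').value
    Show-UDToast -Message "Hello, $Value"
}
```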
You can set the icon of a textbox by using the -Icon
parameter and the New-UDIcon
cmdlet.
The -OnEnter
event handler is executed when the user presses enter in the text field. It is useful for performing other actions, like clicking a button, on enter.
The -OnBlur
event handler is executed when the textbox loses focus.
Use the -OnValidate
event handler to validate input typed in the textbox.
Switch component for Universal Apps
Switches toggle the state of a single setting on or off.
Switches are the preferred way to adjust settings on mobile. The option that the switch controls, as well as the state it’s in, should be made clear from the corresponding inline label.
Create a basic switch.
Respond to when a switch value is changed. The $EventData
variable will include whether or not the switch was checked or unchecked.
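A minimal sketch, assuming `$EventData` carries the checked state as a boolean:

```powershell
New-UDSwitch -OnChange {
    # $EventData is $true when checked, $false when unchecked.
    Show-UDToast -Message "Switch is now: $EventData"
}
```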
You can retrieve the value of the switch within another component by using Get-UDElement
. Use the Checked property to determine whether the switch is checked or not.
A transfer list (or "shuttle") enables the user to move one or more list items between lists.
Component for uploading files in Universal Apps.
The UDUpload component is used to upload files to Universal Apps. You can process files the user uploads. You will receive the data for the file, a file name and the type of file if it can be determined by the web browser.
Upload only supports files up to 2 GB in size.
Uploads a file and shows the contents via a toast.
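A hedged sketch of that example, assuming `$EventData` exposes the uploaded file's name:

```powershell
# Show a toast with the name of the uploaded file.
# Assumes $EventData has Name (and Data/Type) properties, per the structure above.
New-UDUpload -Text 'Upload a file' -OnUpload {
    Show-UDToast -Message "Received $($EventData.Name)"
}
```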
The body of the OnUpload
script block is a JSON string with the following format.
The $EventData
is an object with the following structure.
Uploads a file as part of a UDForm.
The body of the OnSubmit
script block is the same one you will see with any form, and the file will be contained as one of the fields within the form.
This example allows a user to upload a file. Once the file is uploaded, it will be saved to the temporary directory.
This component works with forms and steppers.
Link component for Universal Apps.
Create a hyper link in a dashboard.
Create a basic link that goes to a web page.
Adjust the underline and text style.
Open the link in a new window when clicked.
Execute a PowerShell script block when the link is clicked.
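A minimal sketch of a link with a click handler:

```powershell
# Run a PowerShell script block instead of navigating.
New-UDLink -Text 'Run a script' -OnClick {
    Show-UDToast -Message 'Link clicked'
}
```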
Radio component for Universal Apps
Radio buttons allow the user to select one option from a set.
Use radio buttons when the user needs to see all available options. If available options can be collapsed, consider using a dropdown menu because it uses less space.
Radio buttons should have the most commonly used option selected by default.
An event handler that is called when the radio group is changed. The $Body variable will contain the current value.
Set the default value of the radio group.
You can use custom formatting within the radio group. The below example will place the radio buttons next to each other instead of on top of each other.
Drag and drop layout designer.
The Grid Layout component is useful for defining layouts in a visual manner. You can drag and drop components using the web interface to automatically define the layout as JSON.
You can employ the -Design parameter to configure the layout of your page. This allows dynamic drag and drop of components that you place within the content of the grid layout. As you drag and resize components, the layout will be copied to your clipboard. Note: All components must possess a static -Id.
Once you have configured the layout to fit your needs, you can paste the JSON into your script and assign it with the -Layout parameter. Remove the -Design parameter to lock elements in place.
You can allow your users to dynamically modify layouts by using the -Draggable, -Resizable and -Persist parameters. The layout changes are stored locally so the next time each user visits a page, it will be loaded with their chosen layout.
Grid layout component for Universal Apps.
The responsive layout grid adapts to screen size and orientation, ensuring consistency across layouts.
The grid creates visual consistency between layouts while allowing flexibility across a wide variety of designs. Material Design’s responsive UI is based on a 12-column grid layout.
Adjust the spacing between items in the grid
You can also use the New-UDRow
and New-UDColumn
functions when working with the grid.
When working with columns, you will need to specify the medium and large sizes, otherwise they will always be set to 12.
Stepper component for Universal Apps
Steppers convey progress through numbered steps, providing a wizard-like workflow.
Steppers display progress through a sequence of logical and numbered steps. They may also be used for navigation. Steppers may display a transient feedback message after a step is saved. The stepper supports storing input data in the stepper context. It supports the following controls.
The $Body variable will contain a JSON string that contains the current state of the stepper. You will receive information about the fields that have been defined within the stepper and info about the current step that has been completed. The $Body JSON string will have the following format.
You can validate a step in a stepper by specifying the OnValidateStep
parameter. The script block will receive a $Body variable with JSON that provides information about the current state of the stepper. You will need to return a validation result using New-UDValidationResult
to specify whether the current step state is valid.
The JSON payload will have the following format. Note that steps are 0 indexed. If you want to validate the first step, check to make sure the step is 0.
You will have to convert the JSON string to an object to work with in PowerShell and then return the validation result.
You can direct the user to a particular step in the OnValidateStep
event handler. Use the New-UDValidationResult
-ActiveStep
parameter to move the user to any step after clicking next. Step indices are 0 based.
This example moves the user to the last step after completing the first step.
You can disable the previous button by using the -DisablePrevious
parameter of New-UDValidationResult
.
This example disables the previous step whenever the user moves forward in the stepper.
You can create a vertical stepper by setting the -Orientation
parameter to vertical.
New-UDMenu component for Universal Apps.
The menu component can be used to provide a drop down list of options for the user to select.
Create a basic menu.
You can edit the style of the menu by adjusting the variant parameter.
You can use the value parameter to define a value that differs from the text displayed.
Use the -OnChange
parameter to specify a script block to call when a new value is selected. The value of the selected item will be available in $EventData
.
Tab component for Universal Apps
Tabs make it easy to explore and switch between different views.
Tabs organize and allow navigation between groups of content that are related and at the same level of hierarchy.
Dynamic tabs will refresh their content when they are selected. You will need to include the -RenderOnActive
parameter to prevent tabs from rendering when they are not shown.
Quickly and responsively toggle the visibility value of components and more with the hidden utilities.
Hidden works with a range of breakpoints e.g. xsUp
or mdDown
, or one or more breakpoints e.g. -Only 'sm'
or -Only @('md', 'xl')
. Ranges and individual breakpoints can be used simultaneously to achieve very customized behavior. The ranges are inclusive of the specified breakpoints.
Using any breakpoint -Up
parameter, the given children will be hidden at or above the breakpoint.
Using any breakpoint -Down
parameter, the given children will be hidden at or below the breakpoint.
Using the breakpoint -Only
parameter, the given children will be hidden at the specified breakpoint(s).
The -Only
parameter can be used in two ways:
list a single breakpoint
list an array of breakpoints
Dynamic regions allow you to control the reload of data within the region.
New-UDDynamic
allows you to define a dynamic region. Pages themselves are dynamic in nature. This means that every time a page is loaded, it runs the PowerShell for that page. Sometimes, you may want to reload a section of a page rather than the whole page itself. This is when you will want to use dynamic regions.
This dynamic region reloads when the button is clicked.
An array of arguments may be passed to the dynamic region.
Dynamic regions enable the ability to auto refresh components after a certain amount of time. The entire region's script block will be run when autorefreshing.
Sometimes refreshing a dynamic component may take some time, for example, if you are querying another service's REST API or a database. Dynamic regions support configuration of the component that shows when the region is reloading. By default, nothing is shown. This can be any app component.
Protect sections based on roles.
The Protect-UDSection
cmdlet hides its content if a user does not have the specified roles.
Error boundary component for apps.
The New-UDErrorBoundary
component is used for isolating portions of a dashboard to contain components that may throw an error. Many app components use the error boundary component internally.
If you'd like to isolate a portion of your app to prevent the entire page from failing to load, you can use the following syntax.
If any error is thrown from the content, you will see an error such as this.
Information about UDElements.
The New-UDElement
cmdlet allows you to create custom React elements within your app. Similar to New-UDHtml
, you can define HTML elements using New-UDElement
. Unlike New-UDHtml
, you can update elements, set auto refresh and take advantage of the React component system.
You need to specify the -Tag
and -Content
when creating an element. The below example creates a div tag.
You can nest components within each other to create HTML structures. For example, you could create an unordered list with the following example.
You can set attributes of an element (like HTML attributes) by using the -Attributes
parameter. This parameter accepts a hashtable of attribute name and values. The below example creates red text.
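A sketch of that red-text example:

```powershell
# A div with inline styling applied through -Attributes.
New-UDElement -Tag 'div' -Attributes @{
    style = @{ color = 'red' }
} -Content {
    'This text is red.'
}
```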
You can wrap any component with New-UDElement and add an event handler.
You can define the -AutoRefresh
, -RefreshInterval
and -Endpoint
parameters to create an element the refreshes on a certain interval. The below example creates an element that refreshes every second and displays the current time.
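A sketch of the clock example using those parameters:

```powershell
# Re-runs the -Endpoint script block every second to show the current time.
New-UDElement -Tag 'div' -Id 'clock' -AutoRefresh -RefreshInterval 1 -Endpoint {
    (Get-Date).ToString('T')
}
```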
You can use the Set-UDElement
cmdlet to set element properties and content dynamically. The following example sets the content of the element to the current time.
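A sketch of setting an element's content from a button handler:

```powershell
New-UDElement -Tag 'div' -Id 'time' -Content { 'Click the button' }

# Replace the div's content with the current time on demand.
New-UDButton -Text 'Set Time' -OnClick {
    Set-UDElement -Id 'time' -Content { (Get-Date).ToString('T') }
}
```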
You can also set attributes by using the -Properties
parameter of Set-UDElement
. The following example sets the current time and changes the color to red.
You can add child elements using Add-UDElement
. The following example adds child list items to an unordered list.
You can clear the child elements of an element by using Clear-UDElement
. The following example clears all the list items from an unordered list.
You can force an element to reload using Sync-UDElement
. The following example causes the div to reload with the current date.
You can remove an element by using Remove-UDElement
.
Create a color picker with an OnChange event handler using New-UDElement.
Define static HTML in Universal apps.
You can define static HTML using New-UDHtml
. This cmdlet does not create React components but rather allows you to define static HTML. Any valid HTML string is supported.
The following creates an unordered list.
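A minimal sketch of that unordered list:

```powershell
# Static HTML; this does not create React components and cannot be updated later.
New-UDHtml -Markup '<ul><li>First</li><li>Second</li><li>Third</li></ul>'
```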
Expansion Panel component for Universal Apps
Expansion panels contain creation flows and allow lightweight editing of an element.
An expansion panel is a lightweight container that may either stand alone or be connected to a larger surface, such as a card.
Transition component for Universal Apps.
Transitions allow you to transition components in and out of view within your dashboard using various animations. You can take advantage of interactive cmdlets like Set-UDElement
to change the transition state and cause an element to move in.
In the following example, we have a card that transitions in via a Fade. Clicking the switch toggles the card in and out.
The resulting effect looks like this.
The collapse transition will collapse a section in and out. You can specify a collapse height to only collapse a portion of the section.
A fade transition fades a component in and out as seen in the previous example. You can configure the timeout value to specify the number of seconds it takes to complete the transition.
The slide transition moves a component into position. You can determine the position of the slide by specifying the -SlideDirection
parameter.
The grow transition will fade and grow a component into place.
The zoom transition will zoom a component into place.
Paper component for Universal Apps
In Material Design, the physical properties of paper are translated to the screen.
The background of an application resembles the flat, opaque texture of a sheet of paper, and an application’s behavior mimics paper’s ability to be re-sized, shuffled, and bound together in multiple sheets.
By default, the paper component uses the flex display type for content within the paper. This can cause issues with other types of content that may be stored within the paper. You can override the display type by using the -Style
parameter.
By default, paper will have rounded edges. You can reduce the rounding by using a square paper.
The -Style
parameter can be used to color paper. Any valid CSS can be included in the hashtable for a style.
The following example creates paper with a red background.
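A sketch of that example, assuming `-Style` accepts a hashtable of CSS properties as described above:

```powershell
# Paper with a red background set via the -Style hashtable.
New-UDPaper -Content {
    New-UDTypography -Text 'Red paper'
} -Style @{ backgroundColor = 'red' }
```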
Card component for Universal Apps
Cards contain content and actions about a single subject.
Cards are surfaces that display content and actions on a single topic. They should be easy to scan for relevant and actionable information. Elements, like text and images, should be placed on them in a way that clearly indicates hierarchy.
Although cards can support multiple actions, UI controls, and an overflow menu, use restraint and remember that cards are entry points to more complex and detailed information.
You can use the body, header, footer and expand cmdlets to create advanced cards. The below example creates a card with various features based on a Hyper-V VM.
Build custom components.
Components in PowerShell Universal apps are exposed as functions. You can combine built in components to produce your own custom components.
The below example creates a New-UDPeoplePicker
component from existing app components. You can use the New-UDPeoplePicker
component in your apps. This function can either be defined within your app directly or within a Module.
This example uses a published folder of avatars.
AppBar component for Universal Apps
The App Bar displays information and actions relating to the current screen.
The top App Bar provides content and actions related to the current screen. It's used for branding, screen titles, navigation, and actions.
To create an app bar that is pinned to the bottom of the page, you can use the -Footer
parameter.
A relative footer always stays at the bottom of the document. If the contents of the page do not take up 100% of the screen height, the footer will be positioned at the bottom of the view. If the content is greater than 100% of the screen height, the footer will only be visible when scrolled to the bottom of the content.
A fixed AppBar will show even when the screen is scrolled. It will remain stuck to the top. This example creates an AppBar that is fixed with a div that is 10000 pixels high.