How to use your MS Teams as an email distribution list

When you create a Microsoft Team, a Microsoft 365 group is created to manage the team membership (owners, members and guests). I would go as far as saying the Microsoft 365 group is the backbone of a team. Through the group you also get an email address for the MS team. Refer to this documentation to find the other Microsoft 365 services that get created whenever a team is provisioned.

In this blog post let us see how to enable a team to also act as an email distribution list so that you can send an email to all the team members; by default this option is disabled. You will have to be an owner of the team to set this up. There are a couple of ways to do this

  • Graph Explorer
  • Outlook
  • Exchange Online Powershell
  • Exchange Online Administrator

Graph Explorer:

Graph Explorer is a utility that lets you make requests and get responses against the different Graph endpoints as a signed-in user (delegated permissions). To enable the email distribution functionality, we need the group id of the team so we can set the property autoSubscribeNewMembers to true. To get the group id, go to the team and click Get link to team as shown below

Copy the content from the popup which should be in the below format

To get the group details like email address, mail nickname, display name etc., make a GET request from the explorer to the following endpoint

https://graph.microsoft.com/v1.0/groups/{groupId}

Make a PATCH request to the endpoint https://graph.microsoft.com/v1.0/groups/{groupId} with the payload

{
  "autoSubscribeNewMembers": true
}

Now make a GET request with the group id of the team to the endpoint https://graph.microsoft.com/v1.0/groups/{groupId}?$select=autoSubscribeNewMembers to verify the status. It is all set now.
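If you prefer to make the same PATCH call from code instead of Graph Explorer, here is a minimal sketch using HttpClient. It assumes a delegated access token with Group.ReadWrite.All has already been acquired (token acquisition is not shown), and the group id is a placeholder.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class EnableGroupSubscription
{
    static async Task Main()
    {
        var accessToken = "<delegated-access-token>";          // assumption: acquired elsewhere (e.g. via MSAL)
        var groupId = "539818c4-xxxx-xxxx-xxxx-78dff1762b72";  // placeholder: your team's group id

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // PATCH the group so members receive group email conversations
        var request = new HttpRequestMessage(new HttpMethod("PATCH"),
            $"https://graph.microsoft.com/v1.0/groups/{groupId}")
        {
            Content = new StringContent("{ \"autoSubscribeNewMembers\": true }",
                Encoding.UTF8, "application/json")
        };

        var response = await client.SendAsync(request);
        Console.WriteLine((int)response.StatusCode); // 204 No Content indicates success
    }
}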

Outlook:

The Microsoft 365 group inbox for a team is not available in Outlook, but it can be accessed through the SharePoint site associated with the group. Open the SharePoint site from any of the team's channels as shown below

Click Conversations on the left navigation

The Outlook URL will be in the following format: https://outlook.office365.com/mail/group/domain/mailNickName/email

Access the settings of the group

Click Edit group from the Group Settings

In the Group Settings popup, enable the Subscription option as shown below and then save. By default this setting is disabled for a Microsoft 365 group.

Exchange Online PowerShell:

The same setting can also be enabled from Exchange Online PowerShell if you have Exchange Online administrator access on the tenant. Make sure the Exchange Online PowerShell module is installed. Follow the steps below to turn on AutoSubscribeNewMembers, which distributes group emails to all members

  1. Load the module by running the command Import-Module ExchangeOnlineManagement
  2. Connect to Exchange Online PowerShell in Microsoft 365
    1. Connect-ExchangeOnline -UserPrincipalName userId@domain.com -ShowProgress $true
  3. Set-UnifiedGroup -Identity 539818c4-XXXX-XXXX-b781-78dff1762b72 -AutoSubscribeNewMembers or Set-UnifiedGroup -Identity "Team Display Name" -AutoSubscribeNewMembers
  4. To disable the setting: Set-UnifiedGroup -Identity "Team Display Name" -AutoSubscribeNewMembers:$false

Refer to the documentation from Microsoft for more Exchange Online cmdlets related to Microsoft 365 groups.

Exchange Online Administrator:

Log in to the Exchange Online admin center and click Groups in the dashboard section. Then execute the steps below

  1. Find the group associated to the team (Team Display Name) from the list and then select
  2. Click on Edit (Pencil Icon) from the ribbon
  3. On the General tab, Enable the property Subscribe new members and then Save

Summary: The same setting can also be applied to a team that was created from an existing Microsoft 365 group. Hope you have found this informative. There are already a lot of blogs talking about groups; a few are listed below.

Reference:

https://support.microsoft.com/en-us/office/learn-about-microsoft-365-groups-b565caa1-5c40-40ef-9915-60fdb2d97fa2

https://support.microsoft.com/en-us/office/follow-a-group-in-outlook-e147fc19-f548-4cd2-834f-80c6235b7c36#ID0EAACAAA=Web

https://sharegate.com/blog/office-365-groups-explained

https://www.jumpto365.com/blog/everyday-guide-to-office-365-groups

How to set up a custom domain and email address in a Microsoft 365 online tenant

In this blog post let us see how to add a custom domain and configure an Exchange email address for that domain in a Microsoft 365 tenant. This allows you to create M365 identities for users like user@domain.com instead of user@domain.onmicrosoft.com. The setup is also required if you have a hybrid setup with users from an on-premises Active Directory: the Azure AD Connect tool can synchronize your AD identities from on-premises to Azure AD / the Microsoft 365 tenant directory only if a custom domain has been added to the directory. The custom domain can be added from the Microsoft 365 admin center or from the Azure Active Directory portal associated with the M365 tenant.

Pre-Requisites:

  • Own a Domain from any domain providers
  • Global administrator of Microsoft 365 tenant

If you don't add a domain, user accounts in your organization will use the default onmicrosoft.com domain for their email address and UPN. To set up and configure a custom domain, you will have to

  1. Add a TXT or MX record
  2. Add DNS records to connect Microsoft 365 services

For this blog post I have used Domain.com provider to add the DNS records for the custom domain

Add a TXT or MX record:

The first step is to prove you are the owner of the domain and also to make sure the domain is not associated with a different tenant. To generate the DNS record values and add the custom domain, log in to the Microsoft 365 admin center

  1. Select Show all > Settings > Domains
  2. Click Add domain
  3. Enter the custom domain name you own
  4. Click on the button Use this domain

Select Add a TXT record to the domain's DNS records; you can also add an MX record instead, or add a text file to the domain's website. The different options are shown below

  1. The DNS record values for the TXT record will be generated as shown below. A TTL of 3600 seconds is 1 hour
  2. Add the TXT DNS record from the domain provider's interface for managing records
  3. Go back to the admin center and then click Verify. It takes around 15 minutes to an hour for the DNS records to propagate, and sometimes it may take even longer; keep trying until the domain is verified. Once the domain is verified you will be able to proceed to the next step of configuring the Microsoft 365 services like Exchange. You can also Skip and do the configuration later; even with just this step you can create user accounts that use the custom domain in their UPN, e.g. user@domain.com, but without an email address. Find instructions at this link to add a custom domain from the Azure Active Directory portal.

Add DNS records to connect Microsoft 365 services:

The domain is now added and verified, so it's time to connect Microsoft services like email (Exchange Online, Outlook) and Mobile Device Management (MDM) to the custom domain. In this post we will be connecting only Exchange Online to receive email through Microsoft 365; after this setup is done, Exchange Online will be the new email host for the domain. After the domain is verified in the step above, select Add your own DNS records and click the Continue button as shown below

The following DNS records will be generated as shown below

  • MX Records (Mandatory)
    • Sends incoming mail for your domain to the Exchange Online service in Office 365. Mail is typically delivered to the mail exchange server with the lowest preference number for this record.
  • CNAME Records (Optional: needed for the Outlook client to work)
    • Helps Outlook clients connect easily to the Exchange Online service by using the Autodiscover service. Autodiscover automatically finds the correct Exchange server host and configures Outlook for users
  • TXT Records (Optional: SPF record for spam prevention)
    • Helps prevent other people from using your domain to send spam or other malicious email. Sender Policy Framework (SPF) records work by identifying the servers that are authorized to send email from your domain

Go back to the domain hosting provider's interface to add the above DNS records; to get the values for each record, expand it in the interface shown above.

MX Record:

Set the priority to the highest (i.e. the lowest number, typically 0) and then add the DNS record. If the domain is xyz.com:

Sample value/Content: xyz-com.mail.protection.outlook.com

CNAME Records:

Name: Autodiscover

Value/content: autodiscover.outlook.com

TTL: 1 hour

TXT Records (SPF):

There can be only one SPF record in the DNS records, so if there is already a record (a default one), refer to this link for more information. I already had the default one, so the value for the TXT record looked like v=spf1 ip4:XX.XX.XXX.X/XX include:spf.protection.outlook.com -all

ip4:XX.XX.XXX.X/XX is the pre-existing default entry

Now after all the DNS records are added, choose Continue. This will take you to the last page of the wizard with the message Domain setup is complete

Now that the setup is complete, you can create users with the new custom domain or change an existing user's UPN and email address in the admin center with the following steps

  1. Go to Users > Active users page
  2. Select the user’s name, and then on the Account tab select Manage username.
  3. In the Aliases box, enter the new alias@yourdomain.com and then click Add
  4. Select the new alias and if required change it to the primary email.

Summary: In this post we have seen how to configure a custom domain with email. A tenant can also have multiple domains. Hope you have found this informative. Let me know any feedback or comments in the comments section below

Reference:

https://docs.microsoft.com/en-us/microsoft-365/admin/get-help-with-domains/create-dns-records-at-any-dns-hosting-provider

https://support.microsoft.com/en-us/office/connect-your-domain-to-office-365-cd74b4fa-6d34-4669-9937-ed178ac84515

https://docs.microsoft.com/en-us/microsoft-365/admin/setup/add-domain

https://support.microsoft.com/en-us/office/add-a-new-domain-in-microsoft-office-365-285437c3-d6c9-45cd-8b48-ed29c670c796

https://docs.microsoft.com/en-us/microsoft-365/admin/setup/domains-faq?view=o365-worldwide

https://docs.microsoft.com/en-us/microsoft-365/enterprise/external-domain-name-system-records

https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/set-up-spf-in-office-365-to-help-prevent-spoofing?view=o365-worldwide

Teams Presence Light with Raspberry Pi

Almost every one of us is working from home these days due to the Corona situation we are in as of the time I am writing this article. I recently wrote a blog post about controlling devices from PowerApps with the help of a Raspberry Pi, and thought of extending the project by creating a Teams presence light with a Raspberry Pi and a couple of LEDs in different colours. This is possible thanks to the presence API endpoint in MS Graph, which returns the current Teams presence (Available, Busy, Be right back, Do not disturb etc.) for a signed-in user. As of the time I am writing this article, application permissions are not supported for this endpoint.

Device Code Flow:

The supported permission type for reading presence information from MS Graph is delegated, so a user must sign in to get the Teams presence. But how can a user sign in and authenticate on a device like a Raspberry Pi when we are only using a terminal window to develop and run the application, as I will be doing here? Device code flow to the rescue: it is an authentication flow for delegated permissions that handles remote sign-in/authentication using an auto-generated device code. The flow lets the user sign in interactively from another device (for instance a Windows client with VS Code). With the device code flow, the application obtains tokens through a two-step process designed especially for devices like the Raspberry Pi; other examples of such applications are IoT applications and command-line tools (CLI).

Refer to this blog post for the steps and instructions to develop applications remotely on a Raspberry Pi using VS Code.

Application Design:

A .NET Core console application will poll the MS Graph presence endpoint every 5 seconds and, based on the status, turn on the corresponding coloured light. Find below the high-level design of the application

Active Directory application registration:

Start with registering an Application in Active directory with the following settings

Supported Account Types: Accounts in any organizational directory

Redirect URI (Public client/native): https://login.microsoftonline.com/common/oauth2/nativeclient

Enable Allow public client flows, a required setting for the device code flow to work, as shown below

Add the permission Presence.Read.All if you are going to create a presence light for a user other than the signed-in user, and Presence.Read if it is going to be only for the signed-in user. Once the permission is added, grant admin consent.

Console Application:

A console application with the following packages:

  • System.Device.Gpio: to control the GPIO pins for turning on the different coloured lights
  • Microsoft.Identity.Client: authentication library for the .NET console app, handling MS Graph token acquisition, caching, expiration etc.
  • System.Threading: Timer to poll the MS Graph presence endpoint every 5 seconds
  • Newtonsoft.Json: to parse the MS Graph presence endpoint response
  • System.Net.Http: to make the HTTP GET request to the presence endpoint

If you want to try the MS Graph presence endpoint, go to the Graph Explorer and sign in using the work account linked to your Teams

Beta endpoint URL: https://graph.microsoft.com/beta/me/presence

Request Type: GET
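If you want to call the same endpoint from the console app, here is a minimal sketch. It assumes an access token has already been acquired with MSAL (the helper name is illustrative) and uses Newtonsoft.Json from the package list above to read the availability value.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class PresenceReader
{
    // Returns the availability string, e.g. Available, Busy, Away, DoNotDisturb
    public static async Task<string> GetPresenceAsync(string accessToken)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var json = await client.GetStringAsync("https://graph.microsoft.com/beta/me/presence");
        return JObject.Parse(json)["availability"]?.ToString();
    }
}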

In this example, GPIO pins 12 and 13 are used with green and red LEDs, but you could also use an RGB LED matrix for the Raspberry Pi, which is readily available on the market. Use the client id and tenant id of the registered application in the app.

  • Give the GPIO pins root permissions through the terminal commands /usr/bin/gpio export 12 out and /usr/bin/gpio export 13 out.
  • Run the application by using dotnet run
  • Method AcquireByDeviceCodeAsync(IPublicClientApplication pca) generates the device code
  • As soon as the application is run from the command line, the code is generated as shown below
  • Use the URL https://microsoft.com/devicelogin to log in and authenticate against the code generated above
  • Code pca.AcquireTokenSilent(Scopes, accounts.FirstOrDefault()).ExecuteAsync(); generates the token which will be used along with the Graph GET request for getting the Teams presence status of the user
  • The token is valid only for 3599 seconds, which is close to 1 hour; after that you need to acquire a new token using the same line of code, which I have not handled in the sample code (a small sketch of how this could be handled follows the switch statement below)
  • Polling happens every 5 seconds using the .NET Timer: _timer.Change(TimeSpan.FromSeconds(0), TimeSpan.FromSeconds(5));
  • Based on the Teams presence, the corresponding lights will be turned on using the below code
switch (presenceStatus)
{
    case "Available":
        Console.WriteLine($"{DateTime.Now} : User is Available");
        controller.Write(pinGreen, PinValue.High);
        controller.Write(pinRed, PinValue.Low);
        break;
    case "Busy":
        Console.WriteLine($"{DateTime.Now} : User is Busy");
        controller.Write(pinGreen, PinValue.Low);
        controller.Write(pinRed, PinValue.High);
        break;
}
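As noted above, the sample does not renew the token once it expires. A minimal sketch of how that could be handled on each polling tick is below, assuming the IPublicClientApplication (pca) and Scopes from the sample; MSAL returns the cached token while it is valid and silently refreshes it once it has expired.

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

static class TokenHelper
{
    // Hypothetical helper: call before every Graph request so an expired token
    // is refreshed transparently from MSAL's cache (names follow the sample code).
    public static async Task<string> GetTokenAsync(IPublicClientApplication pca, string[] scopes)
    {
        var accounts = await pca.GetAccountsAsync();
        try
        {
            var result = await pca.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
                                  .ExecuteAsync();
            return result.AccessToken;
        }
        catch (MsalUiRequiredException)
        {
            // Silent acquisition failed; fall back to the device code prompt
            var result = await pca.AcquireTokenWithDeviceCode(scopes, dcr =>
            {
                Console.WriteLine(dcr.Message); // prints the URL and the device code
                return Task.CompletedTask;
            }).ExecuteAsync();
            return result.AccessToken;
        }
    }
}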

Code for this application can be found in this GitHub repo link.

More Information about the Device code Flow:

A POST request to the URL https://login.microsoftonline.com/yourTenantID/oauth2/devicecode with the following header and body:

Content-Type: application/x-www-form-urlencoded

Request Body: resource=https%3A%2F%2Fgraph.windows.net&client_id=ADClientId/Appid

will generate the following response

Login & authenticate using the URL https://microsoft.com/devicelogin with the work account.

Token Generation:

With the information from the above request, the token can be generated with a POST request to the URL https://login.microsoftonline.com/yourTenantID/oauth2/token with the following header and body:

Content-Type: application/x-www-form-urlencoded

Request Body: grant_type=device_code&resource=https%3A%2F%2Fgraph.windows.net&code=CAQABAAEAAAB2UyzwtQEKR7-rWbgdcBZIsC_ydGuxXqxKTcIvapYfPR0edvvCOBAW4VoOZgLHdaAgrf0cBy-5s9Szoez1NmqIgoe0Ggs9p_7-vVilrU6r9CFom5N_M(Information from the Previous response)&client_id= ADClientId/Appid

Will generate the token in the response

Refresh Token:

The refresh token is used to generate a new access token after the initial one expires, by making another request with information like this in the request body

All of these are handled for us by the Microsoft Authentication library for .NET.

Summary: I've used the MSAL for .NET library, but there are also MSAL libraries for Python and other languages, so pick whichever you are comfortable with. Hope you have found this informative and interesting. Let me know any feedback or comments in the comments section below

Reference:

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Device-Code-Flow

https://github.com/Azure-Samples/active-directory-dotnetcore-devicecodeflow-v2

https://ashiqf.com/2020/10/25/tools-to-call-microsoft-graph-api-endpoints-as-a-user-and-application/

Learn how to control devices from PowerApps using Raspberry Pi

I have recently purchased a Raspberry Pi 4 to explore IoT with the Microsoft 365 platform. The Raspberry Pi is a low-cost credit-card sized computer which can be connected to a monitor, keyboard and mouse, and to the Internet via Wi-Fi or its ethernet port. In addition, the Raspberry Pi has a 40-pin GPIO (General Purpose I/O) connector for us to connect sensors (input) and to control devices (output) through a relay. It enables people of all ages to explore computing and to learn how to program in languages like Scratch, Python, .NET Core etc. One of the most popular operating systems for the Raspberry Pi is Raspbian, which is also the official one, but there are other operating systems like Ubuntu and Windows 10 IoT Core (not supported for Raspberry Pi 4). In this blog post, I will cover the different components used to integrate the Raspberry Pi with the Microsoft 365 service PowerApps and Azure services like Azure Functions and Azure IoT Hub to control devices. Find below the design and the different components used

  1. Environment Setup
    1. Raspberry Pi setup for IoT with .NET Core
    2. Visual Studio Code setup for remote development
  2. Azure IoT hub
  3. .NET Core Console Application
  4. Azure Function – HTTP Trigger
  5. Power Apps
    1. Custom Connector
    2. Canvas Apps

Environment Setup:

Raspbian OS is based on the Debian operating system, has been optimized for Raspberry Pi hardware and is the official OS. You can find some instructional videos on the following link to install the OS on your Raspberry Pi

https://www.raspberrypi.org/help/noobs-setup/

If you have ordered a Raspberry Pi with a starter kit, most sellers will have loaded the Raspbian OS image onto the SD card as part of the kit. Once the OS is installed and configured, it is ready for use with the default username pi and password raspberry. Find below the schematic and the GPIO pinout diagram. In the sample code I have used pins 17 and 18 to control devices

Remote Tools:

There are tools to connect to the Raspberry Pi remotely from your Windows client. The software xrdp provides a graphical interface for users to remotely connect to the Raspberry Pi using Microsoft's RDP client mstsc.exe; follow along this blog post to set it up on the Raspberry Pi and enable remote connectivity. You can also use PuTTY, an SSH tool, to remotely connect to the Raspberry Pi device. To find the IP address of the device, log in to the router to which the Raspberry Pi is connected, or run the command hostname -I on the command line. To use the PuTTY client and the VS Code remote development plugin, SSH must be enabled on the Raspberry Pi OS. It is disabled by default; to enable it, follow the steps below

  1. Launch Raspberry Pi Configuration from the Preferences menu.
  2. Navigate to the Interfaces tab.
  3. Select Enabled next to SSH.
  4. Click OK

Raspberry Pi Setup for IoT with .NET Core:

.NET Core is an open-source development platform maintained by Microsoft and the .NET community on GitHub. I have chosen .NET Core and the programming language C#, which I am comfortable with; there are also Python libraries to control the GPIO pins on a Raspberry Pi. To use the .NET Core IoT libraries, install .NET Core 3.1 on the Raspberry Pi. Follow the instructions below to install .NET Core

  1. Copy the direct link of the .NET Core SDK for Linux ARM32. Based on information gathered from a few blog posts, the ARM32 build is the one to use even though the Raspberry Pi 4 hardware is 64-bit. Get the latest link from https://dotnet.microsoft.com/download/dotnet-core/3.1
  2. Open a terminal window on the Raspberry Pi and enter the following command to download the .NET Core SDK binary

wget https://download.visualstudio.microsoft.com/download/pr/8a2da583-cac8-4490-bcca-2a3667d51142/6a0f7fb4b678904cdb79f3cd4d4767d5/dotnet-sdk-3.1.403-linux-arm.tar.gz

  3. Update the Raspbian OS by entering the following commands

sudo apt-get update
sudo apt-get upgrade

  4. Run the following commands to make the .NET SDK commands available for the terminal session

mkdir -p $HOME/dotnet && tar zxf dotnet-sdk-3.1.403-linux-arm.tar.gz -C $HOME/dotnet
export DOTNET_ROOT=$HOME/dotnet
export PATH=$PATH:$HOME/dotnet

  5. To make it available permanently in all sessions, run the following command to open the .profile file and save the information there

sudo nano .profile

  6. Scroll to the end of the file, add the following lines, then save (CTRL+S) and exit (CTRL+X)

# set .NET Core SDK and Runtime path
export DOTNET_ROOT=$HOME/dotnet
export PATH=$PATH:$HOME/dotnet

  7. Run the command dotnet --info to check the installed .NET Core version

Visual Studio Code setup for remote development:

You can develop applications remotely on a Raspberry Pi device using VS Code with the help of the Remote Development plugin, which uses SSH to connect. After the plugin is installed, perform the following steps to remotely connect to the Raspberry Pi device

  1. Have the IP address of the Raspberry Pi ready; it will be used to add an SSH host. Running the command hostname -I in the Raspberry Pi's terminal window will reveal the IP address
  2. Go to VS Code, press CTRL+SHIFT+P, type Remote-SSH: Connect to Host and select it
  3. Click Add New SSH Host
  4. Type ssh pi@x.x.x.x -A and then press Enter. x.x.x.x is the IP address of your Raspberry Pi and pi is the (default) username
  5. Select the configuration file. I have used the default, %USERPROFILE%\.ssh\config on Windows 10
  6. The host will be added. You are now ready to connect remotely, provided SSH is enabled on the Raspberry Pi.

Azure IoT Hub:

IoT Hub is a managed service hosted in the cloud that acts as a central message hub for bi-directional communication between devices and the cloud. There is also a free tier, limited to one per subscription, which can register up to 500 devices and handle 8,000 messages/day as of today according to the pricing calculator. Go through the Microsoft documentation about IoT Hub and create an IoT hub that we will use to send messages to the Raspberry Pi to control the device, as per the instructions given in this article. After the IoT hub is created

  1. A device must be registered with your IoT hub before it can connect. There are different ways to register a device, like using Azure Cloud Shell; in this case we will use the portal. Click IoT Devices under the Explorers blade of the IoT hub, click + New, enter the Device ID and click Save.
  2. Copy the primary key of the registered device
  3. Copy the hostname from the IoT hub Overview blade
  4. These values will be used later in the .NET console application

Device Explorer:

Device Explorer is a tool that helps you manage devices by connecting to the IoT hub you have just created; the same can also be done from the Azure portal, the Azure CLI etc. It is very easy to connect to the IoT hub using the connection string. Download the Device Explorer from https://aka.ms/aziotdevexp

To get the connection string, click Shared access policies under Settings blade and click iothubowner policy. Copy the Connection string-primary key and paste it on the Configuration section of the Device explorer and click Update as shown below

To send a message to the device click the tab Messages to Device and for registering new devices click Management.

.NET Core Console Application:

The Microsoft .NET Core team also has a .NET Core IoT library. The package System.Device.Gpio supports GPIO pins to read sensors (pin mode: Input) and control devices like relays and LEDs (pin mode: Output). In this case we will be using pins 17 and 18 to turn an LED on or off with the pin mode set to Output.

Setup for controlling devices:

To control an LED, connect a 220-ohm resistor to the long lead with the other end of the resistor going to a GPIO pin (17 or 18), and connect the short LED lead to any one of the GPIO ground pins.

In my setup I have used a breadboard, a GPIO extension board and a GPIO extension cable. GPIO pins 17 and 18 are used, but there are many other pins available; look at the GPIO pin schematics for more details. There are also relays designed for the Raspberry Pi which help control real devices, with different relay modules (4-channel, 8-channel, 10-channel etc.) available on the market.

Connect Remotely to Raspberry Pi using VS code:

Connect VS Code to the Raspberry Pi over SSH by using the keyboard shortcut CTRL+SHIFT+P, clicking Remote-SSH: Connect to Host and selecting the IP or hostname of the Raspberry Pi, based on the VS Code setup (SSH host) we did earlier

Enter the password (default: raspberry). If you see SSH: followed by the IP address or host name in the bottom-left corner of VS Code, it is successfully connected

In the terminal window you can enter all bash commands in context of Raspberry Pi.

VS Code Plugin:

To add a package to the console app project from VS Code, install the NuGet Package Manager plugin. You can also use the CLI command in the terminal window to add packages, but the plugin lets us add the different packages through the UI. All the packages will be installed on the Raspberry Pi device as shown below

I also recommend installing the C# and CodeTour extensions (CodeTour provides a code walkthrough for the code I've used in this sample project).

Create the first Console Application to control a LED:

Follow the below steps to control a LED connected to GPIO PIN 22 using the .NET IoT package System.Device.Gpio

  1. On the VS Code terminal window, enter the command dotnet new console to create a new console application
  2. Add the package System.Device.Gpio using the nuget package manager plugin by CTRL+SHIFT+P > NuGet Package Manager: Add Package
  3. Add the following code to the Program.cs file
using System;
using System.Device.Gpio;

namespace DemoProject_GPIOControl
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Turning on Light from Pin 22");
            using var controller = new GpioController();
            controller.OpenPin(22, PinMode.Output);
            controller.Write(22, PinValue.High);
            Console.ReadKey();

        }
    }
}

  4. Run the command dotnet run in the terminal window. There will be an unauthorized exception as below
Unhandled exception. System.UnauthorizedAccessException: Setting a mode to a pin requires root permissions.
 ---> System.UnauthorizedAccessException: Access to the path '/sys/class/gpio/gpio17/direction' is denied.
 ---> System.IO.IOException: Permission denied
  5. Enter the following command in the terminal window to provide root permission for pin 22: /usr/bin/gpio export 22 out. This command has to be executed every time you restart the Raspberry Pi, unless you provide root permissions to the account, which can be done by setting a value in the root configuration file.
  6. Now run the dotnet console app using the command dotnet run, which will turn on the LED connected to pin 22. The output voltage on the pin will be 3.2 volts if the pin value is set to High and zero if it is set to Low.
  7. The code controller.Write(22, PinValue.Low); will turn off the light
  8. To debug remotely on Linux ARM, follow the instructions in this article.
  9. To disconnect in VS Code, click File > Close Remote Connection

Console application connected to Azure IoT hub:

There is a .NET SDK for Microsoft Azure IoT to enable development using .NET, and we will be using the package Microsoft.Azure.Devices.Client to connect client devices to Azure IoT Hub. The other package used in this project is System.Device.Gpio.

Use the Hostname of Azure IoT Hub, Device ID, Primary key of the Device copied earlier during the setup and the GPIO pins as shown below

private const string IotHubUri = "YourIoTHub.azure-devices.net";
private const string deviceKey = "Your Key";
private const string deviceId = "Your device ID";
private const int Pin1 = 17;
private const int Pin2 = 18;

In the code:

  • Method deviceClient.ReceiveAsync() receives a message from the IoT hub queue
  • Method Encoding.ASCII.GetString(receivedMessage.GetBytes()) reads the message
  • Method deviceClient.CompleteAsync(receivedMessage, _ct) deletes the message from the queue

A rough sketch of how these calls fit together in a receive loop is shown below.
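This sketch is not the exact code from the repo, just an illustration of the loop; it assumes the DeviceClient, GpioController and pin constants are set up (and the pins opened) as in the sample, and the message texts follow the ON1/OFF1 convention used below.

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using System.Device.Gpio;

static class ReceiveLoop
{
    public static async Task RunAsync(DeviceClient deviceClient, GpioController controller, int pin1)
    {
        while (true)
        {
            // Waits for the next cloud-to-device message from the IoT hub queue
            Message receivedMessage = await deviceClient.ReceiveAsync();
            if (receivedMessage == null) continue; // ReceiveAsync can return null on timeout

            string text = Encoding.ASCII.GetString(receivedMessage.GetBytes());
            if (text.Equals("ON1", StringComparison.OrdinalIgnoreCase))
                controller.Write(pin1, PinValue.High);
            else if (text.Equals("OFF1", StringComparison.OrdinalIgnoreCase))
                controller.Write(pin1, PinValue.Low);

            // Removes the message from the queue so it is not delivered again
            await deviceClient.CompleteAsync(receivedMessage);
        }
    }
}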

Do not forget to run the commands /usr/bin/gpio export 17 out and /usr/bin/gpio export 18 out based on the pins you are controlling, then run the dotnet application using the command dotnet run. Now send a message from the Device Explorer or from the Azure portal IoT device explorer: ON1 or OFF1 to turn the LED connected to pin 17 on/off, and ON2 or OFF2 to turn the LED connected to pin 18 on/off.

In this example we have used cloud-to-device messages, which send a one-way notification, but you can also use direct methods and device twins to control devices. Go through the following documentation from Microsoft for guidance on sending cloud-to-device communications using the different methods

https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-c2d-guidance

Find the sample console application here on GitHub. Now that the console application is ready, let us create the Azure Function app so it can be used from PowerApps.

Azure Function App – HTTP Trigger:

I've used a Consumption plan function app which is triggered by an HTTP request and sends a message to the registered IoT hub device using the method ServiceClient.SendAsync from the package Microsoft.Azure.Devices, a one-way notification to the registered Raspberry Pi device. The message to send is passed to the function on the HTTP request as a query string (parameter name: name)

  1. Create a function app from Visual Studio 2019; I've used VS 2019 but you can also use VS Code
  2. Add the HTTP trigger with the authorization level of the function app set to Function
  3. Add the Nuget Package Microsoft.Azure.Devices
  4. Have the connection string handy for the iothubowner policy used on the device explorer.
  5. Copy the following Code:
 using System;
 using System.IO;
 using System.Threading.Tasks;
 using Microsoft.AspNetCore.Mvc;
 using Microsoft.Azure.WebJobs;
 using Microsoft.Azure.WebJobs.Extensions.Http;
 using Microsoft.AspNetCore.Http;
 using Microsoft.Extensions.Logging;
 using Newtonsoft.Json;
 using Microsoft.Azure.Devices;
 using System.Text;
 using System.Net;
  
 namespace FunctionApp_IoT
 {
     public static class Function1
     {
         static ServiceClient serviceClient;
         static string connectionString = "HostName=YourIoTHub-env.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=Yourkey";
         static string targetDevice = "Your Device ID";
         [FunctionName("Function1")]
         public static IActionResult Run(
             [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
             ILogger log)
         {
             log.LogInformation("C# HTTP trigger function processed a request.");
  
             string name = req.Query["name"];
  
             serviceClient = ServiceClient.CreateFromConnectionString(connectionString);
  
             SendCloudToDeviceMessageAsync(name).Wait();
             
             return new OkObjectResult(new { status = "Light turned On or Off" });
             
         }
         private async static Task SendCloudToDeviceMessageAsync(string condition)
         {
             var commandMessage = new
              Message(Encoding.ASCII.GetBytes(condition));
             await serviceClient.SendAsync(targetDevice, commandMessage);
         }
     }
  
 } 
  6. Publish the function app to Azure. Test it by sending HTTP requests using the Postman tool or a browser. The function API is ready; we can now call the function from PowerApps
  7. The function app URL will be https://yourfunctionappsubdomain.azurewebsites.net/api/Function1?code=authorizationcode. Since I've chosen the authorization level Function, there will be a code in the URL

PowerApps:

So far we have built a serverless HTTP API endpoint which sends a message to the Raspberry Pi through the IoT hub. To call this API from PowerApps we have to create a custom connector, which allows you to connect to any RESTful API endpoint. Bear in mind that to use a Power App which has a custom connector, the users need a premium license.

Custom Connector:

Let us go ahead and create the custom connector; you can find the Swagger definition file for it here on the GitHub repo. Download the file, go to your Power Platform environment and click the Custom Connectors link under Data.

Click the Import an OpenAPI file under New custom connector and import the Swagger definition file you have downloaded from the repo

Once it is imported, change the host on the General tab based on the function app URL; the Security tab will have the authentication type API Key, and the Definition tab will contain one action which will be called from PowerApps to control devices. After the settings are configured you can create the connector by clicking the Create connector link. You can test the connector by creating a connection, passing in the Code parameter of the function app and sending a message to test the operation. Make sure the console app is running in order to receive the message and turn the device on/off.

PowerApps Canvas App:

Once the custom connector is created you can use it on the PowerApps canvas app by creating a connection to the connector like below

After the connection is created and added to the app, you can use PowerApps controls like a toggle or a button to turn the devices on/off using the code 'IOT-ControlDevice'.ControlDevice({name: "on1"}) / 'IOT-ControlDevice'.ControlDevice({name: "off1"}). If a toggle control is used, the code will be something like this

If(
    ToggleL1.Value = true,
    'IOT-ControlDevice'.ControlDevice({name: "on1"}),
    'IOT-ControlDevice'.ControlDevice({name: "off1"})
)

Voila, now you are able to control devices from PowerApps.

Summary: In this post we have seen how to integrate Azure IoT with PowerApps and control devices through a Power App. This is just a sample; you can extend this example based on your needs. Hope you have found this informative and interesting. Let me know any feedback or comments in the comments section below

Reference:

https://www.hanselman.com/blog/visual-studio-code-remote-development-over-ssh-to-a-raspberry-pi-is-butter

https://edi.wang/post/2019/10/6/azure-remote-controlled-light-with-net-core-30-on-raspberry-pi

https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-csharp-csharp-c2d

https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Quickstart-.NET

https://docs.microsoft.com/en-us/connectors/custom-connectors/define-blank

https://jussiroine.com/2020/06/developing-remotely-on-raspberry-pi-4-and-linux-using-visual-studio-code

Tools to call Microsoft Graph API endpoints as a User and application

This blog post will help you explore and interact with MS Graph API endpoints using the following tools

  • Postman client
    • Signed in as a user/On-behalf-of API call (Delegated permission)
    • Application/daemon API call (Application permissions)
  • Graph Explorer

I have used MS Graph extensively with different MS cloud services like SharePoint, Power Automate and PowerApps, Azure services like Azure Functions, and on devices like the Raspberry Pi. It is a very powerful service in the Microsoft 365 platform. Let's start with some basics

Introduction:

MS Graph API is a RESTful web API which enables you to access different Microsoft 365 cloud service resources through its unified programmability model.

Microsoft Graph exposes REST APIs and client libraries to access data on the following Microsoft cloud services:

  • Microsoft 365 services: Delve, Excel, Microsoft Bookings, Microsoft Teams, OneDrive, OneNote, Outlook/Exchange, Planner, SharePoint, Workplace Analytics.
  • Enterprise Mobility and Security services: Advanced Threat Analytics, Advanced Threat Protection, Azure Active Directory, Identity Manager, and Intune.
  • Windows 10 services: activities, devices, notifications, Universal Print (preview).
  • Dynamics 365 Business Central.

Permission Types:

MS Graph exposes granular permissions that control the access apps have to different resources like sites, users, groups etc. There are two types of permissions

  • Delegated permissions are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests and the app can act as the signed-in user when making calls to Microsoft Graph.
  • Application permissions are used by apps that run without a signed-in user present, for example apps that run as background services or daemons. Application permissions can only be consented to by an administrator.

Access token:

To call a MS Graph API all you need is an access token in the authorization header of an HTTP request.

GET https://graph.microsoft.com/v1.0/me/ HTTP/1.1

Host: graph.microsoft.com

Authorization: Bearer EwAoA8l6BAAU … 7PqHGsykYj7A0XqHCjbKKgWSkcAg==

Access tokens are issued by the Microsoft identity platform and contain information used to validate that the requestor has the appropriate permissions to perform the operation they are requesting. An Active Directory app registration is a prerequisite for generating an access token to call a Graph API endpoint.

There are also Microsoft identity platform authentication libraries for .NET, JavaScript, Android, Objective-C, Python, Java and Angular, which facilitate validation, cookie handling, token caching and maintaining a secure connection. Let's now go ahead and look at the tools

MS Graph Explorer:

Graph explorer is a web-based tool which can be used to build and test requests using Microsoft Graph API. The explorer can be accessed from the following URL:

https://developer.microsoft.com/en-us/graph/graph-explorer

There will be a default Active Directory application in the organizational Active Directory of the M365 tenant named Graph Explorer, with application id de8bc8b5-d9f9-48b1-a8ad-b748da725064. This app can be accessed from the Enterprise applications blade of Active Directory as shown below

Delegated permissions are used by Graph Explorer. Based on your access role and the admin consents granted, you will be able to call different Microsoft Graph APIs from this tool. After you have signed in to the Graph Explorer tool, the access token is generated automatically

To view the token information, copy the token and paste it on the utility https://jwt.ms/

If your token has an scp (scope) claim, then it's a user-based token (delegated permissions). The claim is a JSON string containing a space-separated list of the scopes the user has access to when calling the different Graph endpoints.
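The same check can be done in code; a small sketch is below, assuming the System.IdentityModel.Tokens.Jwt NuGet package. It only reads the claims and does not validate the token signature.

using System;
using System.Linq;
using System.IdentityModel.Tokens.Jwt;

class TokenInspector
{
    static void Main()
    {
        var rawToken = "<paste-your-access-token-here>"; // placeholder: copied from Graph Explorer or Postman
        var jwt = new JwtSecurityTokenHandler().ReadJwtToken(rawToken);

        // Delegated tokens carry an "scp" claim; app-only tokens carry "roles" claims instead
        var scopes = jwt.Claims.FirstOrDefault(c => c.Type == "scp")?.Value;
        var roles = jwt.Claims.Where(c => c.Type == "roles").Select(c => c.Value);

        Console.WriteLine(scopes != null
            ? $"Delegated token, scopes: {scopes}"
            : $"App-only token, roles: {string.Join(" ", roles)}");
    }
}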

Postman Client:

Postman is a tool that can be used to build and test requests against the Microsoft Graph APIs. To use this tool for testing the Graph API endpoints, register an app in Azure Active Directory as per the instructions from this blog post. Add the permissions (delegated and application) you need in order to test with Postman.

Copy the client id, client secret and tenant ID of the registered app. To access the various endpoints like authorization and token, click Endpoints in the Overview section of the Active Directory app.

Setting up the environment using Postman collections:

There are Postman collections with many MS Graph API requests created by Microsoft for us to explore. Import the collections and set up the environment (client ID, client secret, tenant id) for application API calls and on-behalf-of API calls as per the instructions in the following article

https://docs.microsoft.com/en-us/graph/use-postman

Application API Token:

To generate an application token, make a POST request to Get App-Only Access Token from the collection Microsoft Graph. The grant_type is client_credentials since these are application permissions.
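Outside Postman, the same app-only token can be acquired in a few lines with MSAL's confidential client. A minimal sketch is below; the client id, tenant id and secret are placeholders from your own app registration.

using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class AppOnlyToken
{
    static async Task Main()
    {
        var app = ConfidentialClientApplicationBuilder
            .Create("<client-id>")              // placeholder values from your Azure AD app
            .WithClientSecret("<client-secret>")
            .WithTenantId("<tenant-id>")
            .Build();

        // ".default" requests all application permissions already granted to the app
        var result = await app.AcquireTokenForClient(
            new[] { "https://graph.microsoft.com/.default" }).ExecuteAsync();

        Console.WriteLine(result.AccessToken); // paste into https://jwt.ms/ to inspect the roles claim
    }
}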

Token Validity:

The token is valid for 3599 seconds, which is about 1 hour. After that the token expires and you will have to regenerate it by making another call.

The AccessToken (application API call) will be generated and automatically stored in the environment variable AppAccessToken (Microsoft Graph environment) with the help of a script on the Tests tab in Postman. Copy the access token value and paste it into the utility https://jwt.ms/. The decoded token below has information like the application ID/client id of the AD app, the display name and the roles the app can use when calling the Graph endpoints.

Graph API call:

The call to the Graph should have the bearer token

Signed-in user/on-behalf-of API Token:

To generate a Signed-in user token, make a POST request to Get user Access Token from the collection Microsoft Graph. The grant_type is password since it is delegated permissions.

The AccessToken (Signed-in user API call) will be generated and automatically stored on the Environment (Microsoft Graph environment) UserAccessToken with the help of a script on the Tests tab in Postman.

Copy the access token value & paste it on the utility https://jwt.ms/. Find the decoded token below which has information like the Application ID/client id of the AD app, display name and scopes (scp) to which the app has access to poll the graph endpoint. If you remember the Application API token had roles & not scopes, so this is how you can identify the token type.

Storing a production user ID and password in the environment variables is not recommended, since the information is stored in Postman. This can be handled instead by generating an access token from the request's Authorization tab: set the type to OAuth 2.0 and click the Get New Access Token button

Fill in all the information gathered from the app in Azure AD, like the app id, secret and endpoints (authorization and token); state can be any random value

Click Request Token; this will prompt the user to enter the username and password. After authentication, it will generate the token, which can then be used to make API calls.

Graph API call:

The call to the Graph should have the bearer token on the Authorization tab or on the Headers tab

Summary: In this post we have seen how to use tools like Graph Explorer and Postman to test different MS Graph API endpoints. You can make GET, POST, PUT, PATCH and DELETE requests depending on what each endpoint supports. Refer to the Microsoft documentation for the v1.0 and beta endpoints. Once you have explored and tested the API, you are ready to use it in applications using the available SDKs for different programming languages. Let me know any feedback or comments in the comments section below

Collect response from multiple users with Adaptive Card in Teams using Power Automate

This post is in response to a comment on one of the most viewed articles on my blog, which posts an adaptive card to a user in Teams using Power Automate. Assume we have a use case of using adaptive cards to collect responses from any number of users based on data from Excel, a SQL database etc. The response must be unique per user, so there has to be a separate instance of the adaptive card flow for each user, since the flow has to wait until it gets that user's response.

To handle this scenario, we are going to create two flows

  1. Flow 1 – Send Adaptive card to collect response: This flow creates an adaptive card to collect response from each user
  2. Flow 2 – Microsoft Teams User Details: The main flow which has the user details

For this example, I will be storing the user details in an array variable, but you can also generate the user details dynamically or base them on data from various data sources like Excel, a database etc. Let us go ahead and create the flows

Flow 1 – Send Adaptive card to collect response

This flow will be called from flow 2 to create the Adaptive card for the team user to collect response.

Step 1: Create an Instant flow with the trigger type "When a HTTP request is received" and set the method to POST by clicking Show advanced options. Now click Use sample payload to generate schema under the section Request Body JSON Schema, enter the following data for the Teams user email address and click Done to generate the schema

{
  "Email": "user@domain.onmicrosoft.com"
}

The email address of the Teams user will be passed from Flow 2 on the request body.

Step 2: Add the action Post an Adaptive card to a Teams user and wait for a response. The only change is for the field Recipient which should be Email (request body json schema) from the dynamic content of the trigger When a HTTP request is received.

Step 3: Add a Create item action to save the Teams user's response to the SharePoint list. Refer to the blog post Adaptive card to an user in Teams using PowerAutomate for a detailed explanation.

Step 4: Saving the flow automatically generates the HTTP POST URL; this URL will be used in Flow 2. The complete flow should look like the below

We are now ready to create the second flow, from which the adaptive card collect-response flow will be triggered.

Flow 2 – Microsoft Teams User Details:

This flow is the primary flow which triggers Flow 1 to post the adaptive card to multiple Teams users.

Step 1: Create an Instant flow with the trigger type "Manually trigger a flow" and add an array variable to store the user email addresses for sending the adaptive card to collect responses from multiple users.

Step 2: Add the Parse JSON action to parse the email address from the array variable and then click Generate from sample

Paste the array data as given below and click Done to automatically generate the schema for us. Then for the Content parameter in the action, select Teams Users (array variable) from the dynamic content.

[
  {
    "Email": "user1@domain.onmicrosoft.com"
  },
  {
    "Email": "user2@domain.onmicrosoft.com"
  }
]

Step 3: Add a Compose action and select the Email attribute from the Parse JSON output to automatically generate an Apply to each loop as below

Step 4: Add the HTTP action to make a POST request to the HTTP URL created in the first flow, which posts an adaptive card to the Teams user. Find the parameters below

Method: Post

URI: HTTP Request flow URL (when a HTTP request is received) copied from the Flow 1

Headers: Key: Content-Type Value: application/json

Body:

{
  "Email": Output of the Parse JSON action (Email) - to be replaced
}

Authentication: None

This will now create an adaptive card for each user, collecting responses from multiple users without one user's pending response blocking the others.

Summary: In this post we have seen how to send an adaptive card to multiple Teams users using Power Automate. One question may come up: why can't we use the child flow concept and call the adaptive card flow from the parent flow using the Run a Child Flow action available in Power Platform solutions? Because we are using a For Each loop in Flow 2 Step 3, the loop would move on to the next user only after the first user responds to the adaptive card, since a child flow must end with a Respond to a PowerApp or flow action. We also have to keep in mind that the action (HTTP) and trigger (When a HTTP request is received) used in this flow are premium. Let me know any feedback or comments in the comments section below

How to find the Operating System and stack of an existing Azure website

Azure websites, also known as App Service, can be created easily through multiple interfaces like Azure PowerShell, the CLI, the Azure portal etc. While creating the website or App Service you have to select the operating system; the default choice is Windows. The OS selection also depends on the runtime stack you select (.NET Framework, .NET Core, Java etc.). If you have an existing website connected to an App Service plan, at the time I write this post you can't find the OS information in the Overview or Configuration section of the App Service or App Service plan. Let us see how we can find it

Option 1:

In the Azure portal, select the App Service plan of the Azure website. In the App Service plan's left menu, select Apps under the Settings blade. For a Windows OS, the type will show as app

For a Linux website, the type will show as app,linux

Option 2:

In the Azure portal, select the App Service. In the left menu, select Advanced Tools under the Development Tools blade and click Go

Once the Advanced Tools site opens, click Environment in the top-left corner. You can find the OS under System Info. For a Windows App Service:

For Linux App service:

To find the runtime stack, select the App Service in the Azure portal. In the left menu, select Configuration under the Settings blade, then click General settings
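If you prefer to check this from code, a rough sketch below reads the site's kind property from the Azure Resource Manager REST API, which reports app for Windows and app,linux for Linux. The subscription, resource group and site names are placeholders, and the api-version may need adjusting.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class SiteKindCheck
{
    static async Task Main()
    {
        // Placeholders: fill in your own subscription id, resource group and app name
        var url = "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>" +
                  "/providers/Microsoft.Web/sites/<site-name>?api-version=2022-03-01";

        // DefaultAzureCredential picks up az login / environment credentials
        var credential = new DefaultAzureCredential();
        var token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token.Token);

        var json = await client.GetStringAsync(url);
        Console.WriteLine(json); // look for "kind": "app" (Windows) or "app,linux" (Linux)
    }
}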

Hope you have found this informative. Sharing is caring

Hosting static HTML content in SharePoint Online site & Azure

The experience you get by default for all the sites you create in a SharePoint Online tenant is modern. The site pages you create in the modern experience are fast, easy to author and support rich multimedia content, and they look great on any surface, i.e. mobile, browser or the SharePoint app. If you want to host static HTML content with JavaScript, CSS and Bootstrap on a SharePoint Online site, it is not possible out of the box, though it was easily doable with a classic SharePoint site. The reason is that, for security reasons, a modern SharePoint Online site does not by default allow custom scripts that change the look & feel and behaviour of the site. But we do have control to manage this setting at different levels

  1. Organizational Level
  2. Site Level

In this blog post let's see how to host static content (HTML, JS, CSS, images etc.) by updating the site scripts setting at the site level. At the end I list some options to host static content in Azure.

Pre-requisite:

  1. Modern SharePoint Communication Site
  2. SharePoint Online Tenant Admin access for executing few PowerShell commands
  3. HTML Content
  4. Access to Azure Subscription as a Contributor to test static content hosting in Azure

Hosting Static content on a SharePoint Online Site:

For the sample HTML content, I've downloaded the files from the following Azure samples GitHub repo

https://github.com/Azure-Samples/app-service-web-html-get-started

Step 1:

Connect as a SharePoint Online administrator by creating a SharePoint Online connection. This cmdlet must be run before any other SharePoint Online cmdlets.

Connect-SPOService -Url https://domain-admin.sharepoint.com

Step 2:

Disable the property DenyAddAndCustomizePages at the site level by running the following PowerShell command

Set-SPOSite https://domain.sharepoint.com/sites/sitename -DenyAddAndCustomizePages $false

Step 3:

Verify that DenyAddAndCustomizePages is disabled. To check the property value, run the following command

Get-SPOSite -Identity https://domain.sharepoint.com/sites/sitename -Detailed | select DenyAddAndCustomizePages

Step 4:

Be ready with the HTML sample. I’ve downloaded static content from the Azure HTML Sample github repo which has

  • HTML
  • CSS
  • JavaScript

If there is any file with an .html extension, rename the extension to .aspx. In this sample there was one HTML file named index.html, which I've renamed to index.aspx

Step 5:

Open the SharePoint Online communication site in the browser and navigate to the document library. I've chosen the default document library (Shared Documents) for storing the HTML, but you could also create a custom document library or use the site assets library.

Upload the folder which has the .HTML file renamed to .aspx and the supporting files (JS, Images, CSS etc)

After the upload

Click the index.aspx file, it should render the file with HTML, CSS, JS etc as shown below

The URL of the HTML page will be in the following structure for the index.aspx file

https://domain.sharepoint.com/sites/sitename/Shared Documents/HTML_sample_for_Azure_App_Service/index.aspx

Step 6:

You can now Enable the property DenyAddAndCustomizePages by executing the following SharePoint Online PowerShell cmdlet

Set-SPOSite https://domain.sharepoint.com/sites/sitename -DenyAddAndCustomizePages $true

If you want to add another HTML file after the above command, you will have to disable the property DenyAddAndCustomizePages again before adding it. I've shown you how to host static HTML on a SharePoint Online site, which will not cost you anything extra provided there is a Microsoft 365 license. If you need additional features like a custom domain, anonymous access, deployments etc., you can get them with Azure.

Static Content in Azure:

There are a couple of options in Azure to host your HTML, as shown below

  1. Azure App Service
    • You can create an App Service in Azure to host your static HTML. There is Microsoft documentation with detailed instructions to set this up. You get a lot of options with App Service like autoscaling, custom domains, anonymous access, automated deployments etc. There is also a free pricing tier (F1) for hosting your content.
  2. Azure Static Web Apps
    • As of now the service is in preview; it automatically builds and deploys full-stack web apps to Azure from a GitHub repository. During the preview it is free of cost. I've recently tested this; if you want to try it, go through this documentation.
    • There is a VS Code extension for Static Web Apps
    • You can also serve dynamic content with Azure Functions integration.
  3. Azure Storage
    • This service also has the capability to serve static content (HTML, CSS, JS & images) from a blob container. To know more, check this documentation from Microsoft.

Summary: In this post we have seen options to host static content on a SharePoint Online site and in Azure. Based on your requirements (anonymous access, custom domain, cost etc.) you can choose one of the options given above. Hope you have found this informative and helpful in some way. If there is some other option to host static content, please let me know in the comments section below

How to use a sample PCF component in your Power Apps

If you are a PowerApps developer and want to extend its capabilities by bringing in third-party or community-driven PCF (Power Apps Component Framework) components, you can find lots of samples on the Power Apps community website PCF.gallery, in the Power Apps Community and from Microsoft, for both model-driven and canvas apps.

Sample components from Microsoft

If you are new to component framework, I recommend going through the documentation from the following link:

https://aka.ms/pcfdocs

The PowerApps component framework enables developers to create code components for model-driven and canvas apps. I have recently used a control from the PCF gallery community site; let's see how to package and deploy a sample control to the Power Apps environment and then consume it in your canvas app. There are two methods to deploy a code component:

  1. Import the solution in to CDS
  2. Power Apps CLI

To follow along with the blog post, have the following available and installed in your environment

  1. Install Power Apps CLI and Node.js
  2. Access to Power Apps CDS Environment
  3. Developer Command prompt for Visual Studio 2017 or 2019
  4. Power Platform Administrator
  5. Enabling the PowerApps component framework on canvas applications

Method 1: Import the solution in to CDS:

For this post, I have chosen the React Facepile component from the Microsoft Power Apps samples GitHub repo. Follow the steps to create the solution ZIP file to be imported into the solutions gallery. If you already have the solution package, proceed directly to Step 10.

Step 1: Download the repo as a ZIP package and extract it to a folder on your computer, or git clone it from the Microsoft GitHub repository. I have downloaded it to C:\PCF\Controls\sample-controls

git clone https://github.com/microsoft/PowerApps-Samples.git

Step 2: Open the developer command prompt and navigate to the folder on the computer where you downloaded the React Facepile component, using the cd folder-path-react-facepile-component command, e.g. folder-path: C:\PCF\Controls\sample-controls\PowerApps-Samples\component-framework\TS_ReactStandardControl

Step 3: Install all the required dependencies by running the command npm install

Step 4: Create a folder (e.g. ReactStandardControlSolution) in the root of the React Facepile component project (e.g. C:\PCF\Controls\sample-controls\PowerApps-Samples\component-framework\TS_ReactStandardControl), either manually or using the command mkdir ReactStandardControlSolution

Step 5: Navigate to the created folder by using the command cd ReactStandardControlSolution

At your command prompt, you should now be in e.g. C:\PCF\Controls\sample-controls\PowerApps-Samples\component-framework\TS_ReactStandardControl\ReactStandardControlSolution

Step 6: Create a new solution project using the following command. The solution project is used for bundling the code component into a solution zip file that is used for importing into Common Data Service.

pac solution init --publisher-name developer --publisher-prefix dev

The publisher-name and publisher-prefix values should be unique to your environment

Step 7: Add the reference using the command shown below. This reference informs the solution project about which code components should be added during the build. The path should point to the root of the downloaded React Facepile component and not to the newly created solution folder

pac solution add-reference --path C:\PCF\Controls\sample-controls\PowerApps-Samples\component-framework\TS_ReactStandardControl\

Step 8: To generate the ZIP package, enter the following command

msbuild /t:build /restore

Step 9: The generated ZIP file will be available in the \bin\debug\ folder once the build is successful

Note: Make sure there are no spaces in the folder names you create, to avoid deployment issues

Reference:

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/import-custom-controls

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/use-sample-components

Step 10: Now it's time to import the solution into the solutions gallery. Sign in to Power Apps and select Solutions from the left navigation. On the command bar, select Import and then browse to the solution ZIP file created in the above steps. After the solution is imported successfully, it is available to use in Power Apps canvas and model-driven apps.

Reference: https://docs.microsoft.com/en-us/powerapps/maker/common-data-service/import-update-export-solutions

Let’s see the next method to deploy the code component

Method 2: Power Apps CLI:

In the previous method the Power Apps CLI was used to generate the solution package, and the solution was then imported into the gallery; in this method the code component will be pushed directly to the CDS service instance using the CLI push command.

Step 1: Create an authentication profile for the CDS instance by executing the following command at a command prompt; it's not necessary to open a VS developer command prompt.

pac auth create --url https://xyz.crm.dynamics.com

To get the URL, sign in to Power Apps and, in the top-right corner, select the environment which has CDS and to which you are planning to deploy the code component. Select the settings button in the top-right corner and select Advanced settings. Now copy the URL from the web browser, which should look like the below

https://orgchangedhere.crm4.dynamics.com/main.aspx?settingsonly=true

The URL is https://orgchangedhere.crm4.dynamics.com/

Once your profile is successfully created, you should see the following message on your command prompt

Step 2: Navigate to the root folder of the custom component project, the one containing the .pcfproj file, using the cd folderpath command (e.g. C:\PCF\Controls\sample-controls\PowerApps-Samples\component-framework\TS_ReactStandardControl)

Step 3: Install all the required dependencies by running the command npm install

Step 4: Run the following command to push the code components to the CDS instance

pac pcf push --publisher-prefix contoso

Note: The publisher prefix that you use with the push command should match the publisher prefix of your solution in which the components will be included.

Reference:

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/import-custom-controls#deploying-code-components

List of common PAC commands

https://docs.microsoft.com/en-us/powerapps/developer/common-data-service/view-download-developer-resources

The component is now ready to be used in a canvas or model-driven app after deploying the code using Method 1 or Method 2.

To add the component in a Canvas App:

Follow along with the documentation from Microsoft

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/component-framework-for-canvas-apps#add-components-to-a-canvas-app

Find below the sample controls I’ve added on the Power App canvas app

To add the component in a Model Driven app:

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/add-custom-controls-to-a-field-or-entity

Summary: You can also create a custom component from scratch or extend the functionality of the available samples based on your needs. Hope you have found this informative and helpful in some way. Let me know any feedback or comments in the comments section below