Text-to-Speech and Audio Playback in Power Apps using Azure and Power Automate

Capabilities like text-to-speech (TTS) and audio playback can take your applications to new heights of user engagement and accessibility. In this blog post, we’ll look at integrating text-to-speech and audio playback functionalities into Power Apps using Power Automate and Azure Speech Services. Whether you’re looking to provide dynamic narration, streamline communication, or enhance accessibility, this post will walk you through the steps to integrate TTS capabilities into your Power Apps projects.

Prerequisites:

Before you begin, ensure that you have the following prerequisites in place:

  • Maker role in Power Platform environment
  • Premium License – HTTP Connector
  • Azure Subscription Access
    • Azure Speech services – Text to speech

Creating Speech Services in Azure for Text to Speech:

Azure provides Speech Services that enable developers to integrate advanced speech capabilities into their applications, including Text to Speech (TTS). With Azure Speech Services, you can convert text into speech in various languages and voices.

Step 1: Create the resource Speech services in the Azure Portal

Step 2: Copy the Key from the Keys and Endpoint section within the Resource Management blade. This Key is used for authentication when making requests to the Speech service APIs, enabling text-to-speech conversion in the Power Automate flow through the HTTP connector.

Step 3: Go to the Speech Studio to choose a voice from the gallery provided in Text to Speech section. Alternatively, you can create a custom voice using your own audio recordings. The Speech Studio can also be accessed from the Overview section of the Speech service in the Azure portal.

Power Automate Flow to convert the text to speech:

Power Automate orchestrates the integration between Power Apps and Azure Speech Services, enabling communication between the two. Create an instant Power Automate flow with the trigger “PowerApps (V2)”, either from the Power Automate portal or directly from the Power Apps maker interface. Add a text input named varTextInput, as shown below, to send the text from Power Apps.

The next step involves converting the text to speech using the Text to Speech REST API through the HTTP connector action. Add the HTTP action with the following request details:

Method: POST

URI: Depending on the region where you’ve created the Azure Speech resource, select the corresponding Rest API endpoint from the list in the Microsoft documentation. For instance, if the Speech Service resource is created in West Europe, the URL will be:

https://westeurope.tts.speech.microsoft.com/cognitiveservices/v1

Headers:

Ocp-Apim-Subscription-Key: Key copied earlier from the Azure Speech resource
X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm
User-Agent: applicationName
Content-Type: application/ssml+xml

Body:

<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Female'
name='en-US-JennyNeural'>
@{triggerBody()['text']}
</voice></speak>

In the request body, insert the varTextInput value received from the Power Apps trigger. I have used the voice en-US-JennyNeural; you can select a different voice from the voice gallery as discussed above.

Next, add a Compose action to convert the audio generated from the HTTP action into base64 format. This will serve as the text output passed in the Respond to a PowerApp or flow action, as shown below:

Base64AudioContent compose action expression: base64(body('HTTP-TexttoSpeech'))
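
To wire this output back to the app, the response action can be configured along these lines (a minimal sketch, assuming the compose action is named Base64AudioContent and the text output is named varaudiocontent to match the Power Apps formula later in this post):

Respond to a PowerApp or flow
Output type: Text
Name: varaudiocontent
Value: @{outputs('Base64AudioContent')}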

Save the flow.

Power Apps for Text Narration:

Let’s develop the app for the text narration feature, where users can input text to be converted into audio using the Power Automate flow created earlier. On the canvas, add a Text Input control for entering the desired text, an Audio control to play the audio generated by the Azure text-to-speech service, and a button to trigger the Power Automate flow. Make sure the flow is added to the app. Add the following code to the OnSelect property of the button:

// Reset the Audio1 control to its default state, clearing any previous audio.
Reset(Audio1);

// Run the TexttoSpeechFlow Power Automate flow, passing the text from TextInput1 as input.
// Store the result (converted audio) in the varconvertedAudio variable.
Set(varconvertedAudio, TexttoSpeechFlow.Run(TextInput1.Text));

// Set the playAudioContent variable to false, ensuring that any previous audio playback is stopped.
Set(playAudioContent, false);

// Set the playAudioContent variable to true, triggering playback of the newly converted audio.
Set(playAudioContent, true);

The variable playAudioContent will be used in the Audio control’s Start property to play the audio automatically.

The Media property of the Audio control should use the following formula, where varaudiocontent is the output variable added in the ‘Respond to a PowerApp or flow’ action of the Power Automate flow:

"data:audio/x-wav;base64,"&varconvertedAudio.varaudiocontent

Here, x-wav is the format of the audio generated by the Text to Speech REST API in the Power Automate flow, which can be validated from the output of the HTTP action HTTP-TexttoSpeech.
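
Putting the two Audio control properties together (a sketch, assuming the control is named Audio1 and the flow output is varaudiocontent as above):

// Audio1.Start – starts playback automatically once the flow returns
playAudioContent

// Audio1.Media – the base64 audio returned by the flow
"data:audio/x-wav;base64," & varconvertedAudio.varaudiocontent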

You are now ready to test your app.

Summary:

By combining the power of Power Automate and Azure Speech Services, developers can quickly integrate text-to-speech and audio playback functionalities into their Power Apps. Hope you have found this informational & thanks for reading. If you are visiting my blog for the first time, please do look at my other blogposts.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

Automate the assignment of Capacity Add-ons in Power Platform Environment using Power Automate Flow

In Power Platform, capacity add-ons such as AI Builder Credits, Per-App plan, Power Pages Capacity, Power Automate Per Flow, Power Automate Process, Copilot Studio messages are allocated at an environment level and are not tied to individual users, unlike the Power Apps/Power Automate Premium plan. These add-ons are assigned to an environment through the Power Platform Admin Center. However, there may be cases where the allocation of add-ons needs to be automated as part of the license assignment process, leveraging IT service management tools such as ServiceNow, BMC Remedy or any custom tools.

This blog post will explore how to automate the capacity assignment using the Power Platform API, which is currently in preview at the time of writing.

Pre-Requisites:

  • Power Platform Administrator
  • Access to create Entra ID App registration
  • Power Automate Premium – License

Authentication of Power Platform API:

To access the resources available via Power Platform API, the API must be authenticated with a token generated using an Entra ID application. This token is sent as a header along with each API request. Client credentials authentication flow is used with the Service Principal.

Active Directory App registration:

To generate a bearer token, the first step is to register an Active Directory app with the Power Platform API permission to call the API endpoints responsible for assigning capacity to an environment. Once the registration is complete, add the permission Licensing.Allocations.ReadWrite as detailed in the documentation, to assign Capacity Add-ons as shown below

Select the permission as shown below

Admin consent is not required.

Make sure to note the Client ID/Application ID, Client Secret, and Tenant ID associated with the registered application, as these details will be essential for the Power Automate flow.

Registering the Entra ID app as an Admin management Application:

Access for the registered Entra application needs to be granted by a user with the Power Platform Administrator role to be utilized as a Service Principal for calling the capacity allocation API. Use the following PowerShell command to grant the necessary permissions for the App Reg/service principal to invoke the Capacity Addon allocation API.

Add-PowerAppsAccount

New-PowerAppManagementApp -ApplicationId EntraIDAppRegistrationClientId

Replace EntraIDAppRegistrationClientId with the Application (client) ID of the Entra ID app registration created earlier.

Note: The Service Principal flow doesn’t use application permissions and is instead treated as a Power Platform Administrator for all API calls that it makes.

Power Automate Flow:

For testing purposes, I’ve created an instant flow; choose a trigger type that aligns with your specific needs. Add an HTTP action to generate an access token for calling the API. Find the HTTP request details below:

Request Type: POST

URI: https://login.microsoftonline.com/tenantId/oauth2/v2.0/token

Headers:

Content-Type: application/x-www-form-urlencoded
Accept: application/json

Body:

grant_type=client_credentials&client_id=clientID&client_secret=secretfromEntraIDAppReg&scope=https://api.powerplatform.com/.default

Make sure to replace tenantId, clientID and secretfromEntraIDAppReg in the HTTP request with the values noted earlier.
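
For reference, a successful token request returns a JSON payload along these lines (values truncated for readability):

{
  "token_type": "Bearer",
  "expires_in": 3599,
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs..."
}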

Add a Compose action with the following expression to extract the access token from the above HTTP request:

body('nameOfTheHTTPConnectorAction').access_token

Add another HTTP action to assign the capacity using the Currency Allocation by Environment API. Find the HTTP request details below:

Request Type: PATCH

URI: https://api.powerplatform.com/licensing/environments/environmentID/allocations?api-version=2022-03-01-preview

Headers:

Authorization: Bearer @{outputs('AccessTokenComposeAction')}

Body:

{
  "currencyAllocations": [
    {
      "currencyType": "AI",
      "allocated": 150
    }
  ],
  "environmentId": "environmentID"
}

The HTTP request body above pertains to AI Builder credit allocation. For other capacity types, such as Per App plan, Copilot Studio, and Power Pages, follow the currency type information outlined in the following documentation:

https://learn.microsoft.com/en-us/rest/api/power-platform/licensing/currency-allocation/patch-currency-allocation-by-environment

Make sure to replace the environmentID in both the URI and the Body accordingly.

Test the flow; the environment should now have 150 AI Builder credits allocated.

To get existing capacity assignments on an environment, make a GET request to the following endpoint:

https://api.powerplatform.com/licensing/environments/environmentID/allocations?api-version=2022-03-01-preview
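
The response of this GET request mirrors the PATCH body shown earlier; a sketch of what it returns for the allocation created above:

{
  "environmentId": "environmentID",
  "currencyAllocations": [
    {
      "currencyType": "AI",
      "allocated": 150
    }
  ]
}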

Summary:

This capability opens doors to enhanced license assignment processes for Power Platform, offering an approach for managing and optimizing Power Platform add-ons through automation. Hope you have found this informational & thanks for reading. If you are visiting my blog for the first time, please do look at my other blogposts.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

Streamlining Integration: Using Azure Managed identities in Power Apps and Power Automate to access Microsoft Graph API – Part 1

Using Microsoft Graph in Power Apps and Power Automate offers several advantages for streamlining integration with various Microsoft 365 services and applications. Additionally, securing these integrations with Azure Managed Identities significantly enhances the overall security of the solution. Azure Managed Identities enable applications and services to authenticate with Azure services seamlessly and securely. When it comes to using Microsoft Graph API, you don’t need a client secret anymore. This makes it simpler to manage and keeps everything more secure. This blog series, divided into multiple articles, is dedicated to utilizing managed identities either System Assigned or User Assigned in Power Platform to access MS Graph API endpoints. It leverages Azure API Management service with the support of a custom connector. The focus of this particular article is on configuring the Azure API management service with the necessary Microsoft Graph permissions for the managed identity.

Pre-requisites & permissions:

Here are the resources and permissions required to follow along with this blog post:

Azure Subscription/Entra ID:

You need an Azure subscription to create and manage Azure API Management instances.

  • Azure Managed Identity – User or System Assigned:
    • Create or use an existing Azure Managed Identity. This can be either a user-assigned identity or a system-assigned identity of the APIM resource, depending on your requirements.
  • Global Administrator or Privileged Role Administrator role:
    • The user should have the Global Administrator or Privileged Role Administrator role in Microsoft Entra ID to grant admin consent for the Microsoft Graph permissions on the managed identity using the Microsoft Graph PowerShell SDK.

Power Platform Environment:

Set up a Power Platform Environment where you plan to create and use custom connectors.

  • Role:
    • Ensure that the user/maker has the System Administrator/Customizer/Maker role on the Power Platform Environment. This role is required to create custom connectors.
  • DLP Policy:
    • Make adjustments to allow custom connector endpoints, especially in cases where endpoints are blocked by the tenant scoped DLP policy.
  • License:
    • A Power Apps or Power Automate Premium license is necessary for creating and using custom connectors. Ensure that the user has the required premium license assigned.

Azure API Management Setup with Microsoft Graph Permissions:

Create an Azure API Management resource and enable the system-assigned managed identity; if required, also add a user-assigned managed identity in the Security section of the API Management instance. Execute the following PowerShell script, which uses the Microsoft Graph PowerShell SDK, to add the permission User.Read.All to either the system-assigned or user-assigned managed identity. Adjust the permissions as needed for your specific requirements. Before executing the script, replace the permission and the display name of the managed identity depending on which identity you have used. If you have used a system-assigned managed identity, the display name corresponds to the name of the API Management instance.

# Install the Microsoft Graph PowerShell module if not already installed
Install-Module Microsoft.Graph -Scope CurrentUser

$PermissionName = "User.Read.All"
$DisplayNameOfMSI = "replaceherewithactualnameofManagedIdentity"
$GraphAppId = "00000003-0000-0000-c000-000000000000"

# Connect to Microsoft Graph
Connect-MgGraph -Scopes "Directory.ReadWrite.All","AppRoleAssignment.ReadWrite.All"

# Get the Managed Identity service principal
$MSI = (Get-MgServicePrincipal -Filter "displayName eq '$DisplayNameOfMSI'")

# Sleep for a while to allow time for service principal creation if needed
Start-Sleep -Seconds 10

# Get the Microsoft Graph service principal
$GraphServicePrincipal = Get-MgServicePrincipal -Filter "AppId eq '$GraphAppId'"

# Retrieve the app role from the Microsoft Graph service principal based on the specified permission name
$Role = $GraphServicePrincipal.AppRoles | Where-Object {$_.Value -eq $PermissionName}

# Create an app role assignment hashtable for assigning the role to the Managed Identity
$AppRoleAssignment = @{
    principalId = $MSI.Id
    resourceId  = $GraphServicePrincipal.Id
    appRoleId   = $Role.Id
}

# Assign the Graph permission
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $MSI.Id -BodyParameter $AppRoleAssignment

You can download the above script from this link. If you prefer using the Azure AD PowerShell module, keep in mind that it is planned for deprecation. In such a case, you can get the script from this link.

Note: You’ll find the Microsoft Graph PowerShell equivalents of the Azure AD PowerShell cmdlets at the following link:

https://learn.microsoft.com/en-us/powershell/microsoftgraph/azuread-msoline-cmdlet-map?view=graph-powershell-1.0

Upon successful execution of the script, the following message will be displayed

The permission granted to the Managed identity can be validated from the Entra ID portal as shown below

Configure the Microsoft Graph API endpoint in API Management and configure policy:

In the API Management instance, on the left menu select APIs > + Add API. Select HTTP, enter the following settings, and then select Create.

Display name: msgraph
Web service URL: https://graph.microsoft.com/v1.0
API URL suffix: msgraph

System Assigned Managed Identity:

Navigate to the newly created API and select Add Operation. Enter the following settings for accessing the API through System Assigned Managed Identity (SAMI) and select Save.

Display name: getUsrProfileSAMI
URL for GET: /users/{User (UPN)}

Select the operation getUsrProfileSAMI. In the Inbound processing section, select the (</>) (code editor) icon to use the authentication-managed-identity policy to authenticate with the Microsoft Graph API endpoint using the managed identity. This policy uses the managed identity to obtain an access token from Microsoft Entra ID for accessing the specified Graph resource.

Replace with the following code in the inbound node:

<inbound>
<base />
<authentication-managed-identity resource="https://graph.microsoft.com" />
</inbound>

User Assigned Managed Identity:

If you prefer to utilize a user-assigned managed identity, click Add Operation, enter the following settings for accessing the API via the User Assigned Managed Identity (UAMI), and then click Save.

Display name: getUsrManagerUAMI
URL for GET: /users/{User (UPN)}/manager

The Inbound processing section should have the following code

<inbound>
<base />
<authentication-managed-identity resource="https://graph.microsoft.com" client-id="ReplaceWithTheApplicationIdOfTheUAMI" />
</inbound>

You can now test the Graph API endpoints for both identities from the Test tab.

Add CORS Policy to API in API Management:

CORS settings allow or restrict web applications or services hosted on other domains from making requests to your API. To enable cross-origin requests to the configured Graph APIs from a Power Platform custom connector, you need to configure CORS settings in the API Management service. In the left menu, select APIs and select the API that you will export as a custom connector. Optionally, select a single API operation to apply the policy to.

In the Policies section, in the Inbound processing section, select + Add policy. Select Allow cross-origin resource sharing (CORS).

Add the following Allowed origin: *

Select Save.

I have added *, which allows all URLs, but you can be more specific by adding only the relevant URLs, such as https://make.powerapps.com and https://make.powerautomate.com.
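
For reference, the resulting inbound policy XML looks roughly like the following (a sketch; the portal generates the exact origins, methods, and headers based on your selections):

<inbound>
    <base />
    <cors allow-credentials="false">
        <allowed-origins>
            <origin>*</origin>
        </allowed-origins>
        <allowed-methods>
            <method>GET</method>
            <method>POST</method>
        </allowed-methods>
        <allowed-headers>
            <header>*</header>
        </allowed-headers>
    </cors>
</inbound>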

Reference: https://learn.microsoft.com/en-us/azure/api-management/enable-cors-power-platform#add-cors-policy-to-api-in-api-management

Summary:

Up to this point, we have set up the API Management instance with Graph API endpoints for both System Assigned and User Assigned identities. In the upcoming article, we will delve into exporting the API to the Power Platform as a custom connector, implementing security through API key authentication. Hope you have found this informational & thanks for reading. If you are visiting my blog for the first time, please do look at my other blogposts.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

Streamlining Integration: Leveraging Service Principal Authentication for SQL Connector in Power Apps and Power Automate

In the ever-evolving landscape of business processes and data management, efficient integration is the key to success. Securing and managing connections in Power Apps and Power Automate is a critical aspect of integration. This blog post delves into how to use Service Principal authentication to create a connection to an Azure SQL Server database with the SQL Server connector in Power Apps and Power Automate. The other supported authentication types for the SQL Server connector are Azure AD Integrated, SQL Server Authentication, and Windows Authentication.

Prerequisites:

  • An existing Azure SQL Database deployment with Owner role.
  • Access to an existing Microsoft Enterprise tenant for creating an Azure AD App registration.

Setting up the Service Principal:

Let’s head over to the Microsoft Entra admin center to register an AD application. To register an app, you need to be either a Microsoft Entra admin or a user assigned the Microsoft Entra ID Application Developer role.

To register your application:

In the Azure portal, select Microsoft Entra ID > App registrations > New registration (Microsoft only – Single Tenant)

Retrieve the Client ID, Tenant ID, and Display name from the Overview section of the Azure AD app, and then generate a secret within the Certificates & secrets section under the Manage blade. Once the secret is generated, copy its value.

Granting SQL Roles to Service Principal in Azure SQL Database:

Now that the service principal is created, you can grant an SQL role either from SQL Management Studio or the Azure Portal. In this post, I have used the Azure portal. Follow these steps:

  1. In the Azure portal, navigate to your SQL database’s Overview page.
  2. From the left menu, select “Query editor (preview).”
  3. Connect to the database using either SQL Server Authentication or Microsoft Enterprise Authentication.
  4. In the query window, execute the following script to create a new user in the SQL Server database authenticated with the Azure AD provider.
  5. Run a second query to add the newly created user to the “db_owner” database role. You can assign the role based on your specific requirements.
CREATE USER [PPServicePrinicipal-AzureSQLServer-DisplayNameoftheServicePrincipal] FROM EXTERNAL PROVIDER
GO
EXEC sp_addrolemember 'db_owner', [PPServicePrinicipal-AzureSQLServer-DisplayNameoftheServicePrincipal]
GO

Create Connection:

Now that the service principal has access to the Azure SQL database, let’s proceed to create the connection using the SQL Server connector. In the Power Apps maker portal, navigate to Connections and click on + New Connection as shown below:

From the connectors list, choose SQL Server, and then select the Authentication type as Service Principal (Azure AD application). Enter the Tenant ID, Client ID, and the secret that you copied earlier for the service principal. Finally, click Create, as shown below:

The connection has now been successfully created and is ready for use in Power Apps and Power Automate.

Use the connection in Power Automate Flow:

In the Power Automate Portal, begin by creating an Instant flow. Add the Get Rows action from the SQL Connector and ensure that you’ve selected the connection associated with the Service Principal created earlier.

For the Server name, choose Enter custom value, and enter the Azure Server name in the format serverName.database.windows.net. For the Database name, select ‘Enter custom value’ and enter the Database Name. As for the Table, it may automatically load, or you can select ‘Enter custom value’ and specify it as [dbo].[TableName].

Execute the flow, and it should run successfully. I’ve also tested it with a trigger (e.g. When an item is created), and it didn’t work; I will provide an update here as soon as I gather more information.

Use the connection in Power Apps:

Begin by creating a blank Power App from the Power Apps maker portal. Add the SQL connector from the Data section in the left navigation bar, and select the SQL connection you have created earlier. Provide the SQL server name and the database name, then click Connect. This will allow you to select tables and create the data source connection.

Add a Gallery control and then test it.

References:

https://learn.microsoft.com/en-us/connectors/sql/#service-principal-azure-ad-application

https://learn.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-service-principal-tutorial?view=azuresql#create-the-service-principal-user-in-azure-sql-database

Summary:

In this blog post, I’ve shown how to utilize Service Principal authentication with the SQL Server connector in Power Automate and Power Apps. While there are still some limitations, it’s encouraging to see that Microsoft is actively working to expand the capabilities of Service Principal authentication. If you found this post helpful, you might also be interested in my previous article, where I discuss the use of Service Principal authentication with custom connectors via the Graph API. Hope you have found this informational & thanks for reading. If you are visiting my blog for the first time, please do look at my other blogposts.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

Streamlining Integration: Using Service Principal authentication on Custom connectors with Microsoft Graph Application Permissions

Microsoft recently announced a long-awaited feature: support for service principals in custom connectors, which is currently in public preview. This empowers you to authenticate as a service principal instead of relying on user accounts. It’s a game-changer that paves the way for a multitude of scenarios, especially those requiring seamless, uninterrupted access for automated processes, free from the constraints of user involvement.

In one of my earlier posts, I discussed how to harness the power of the Microsoft Graph API within custom connectors through delegated permissions. In this article, I’ll delve into the step-by-step process of configuring service principal authentication in a custom connector for the Graph API with application permissions to send emails. While I’ve chosen to focus on email communication, remember that you have the flexibility to opt for any of the supported Graph application permissions.

Setting up the Service Principal:

Let’s head over to the Microsoft Entra admin center to register an AD app and grant the application permission to send emails using the Graph API. Register an AD application with the following application permission:

Mail.Send: Send Mail as any user

Retrieve the Client ID & Tenant ID from the Overview section of the Azure AD app, and then proceed to generate a secret within the Certificates & secrets section under the Manage blade. Once the secret is generated, copy its value for use within the custom connector configuration. Add a Web Redirect URI https://global.consent.azure-apim.net/redirect as shown below

The Redirect URI is common and will be created while creating the custom connector.

Create Custom Connector:

With the service principal now created, let’s proceed to create the custom connector from the Power Apps maker portal. Choose the environment where you intend to create the custom connector. Navigate to Custom connectors on the left navigation menu, then click on + New custom connector and select Create from blank.

Once you’ve provided the connector name, you’ll be presented with the following screen. Enter graph.microsoft.com in the Host field and provide a brief description of the connector. Additionally, you have the option to customize the connector’s logo to your preference.

Now click Security at the lower-right corner of the above screen, which allows you to input the Azure AD application information for the service principal/App registration created earlier in the Entra Admin portal.

Here’s the step-by-step configuration:

  • Choose Authentication type as OAuth 2.0.
  • Change the Identity provider to Azure Active Directory.
  • Check the box Enable Service Principal support
  • Enter the Client ID and Client Secret from the Azure AD application.
  • Keep the Authorization URL as https://login.microsoftonline.com and Tenant ID as common.
  • Enter the Resource URL as https://graph.microsoft.com
  • For the Scope, specify Mail.Send based on the permissions you have added to the Azure AD app. If you have multiple permissions, separate them with spaces.

Once you’ve filled in this information, click Create connector. This action will automatically generate the Redirect URL https://global.consent.azure-apim.net/redirect, which should match the Web Redirect URI you previously added in the Azure AD application. With this configuration, your connector is now ready for adding actions based on the Graph API endpoints for sending emails.

Create Action to Send email:

With the connector successfully created, it’s time to create the action for sending emails. This action can be utilized in both Power Apps and Power Automate. The Graph API endpoint for sending emails is:

Http Request Mode: POST

Request URI: https://graph.microsoft.com/v1.0/users/{fromEmailAddress}/sendMail

The request parameter fromEmailAddress collects the sender’s mailbox address from the user when the action is used.

Request Body:

{
  "message": {
    "subject": "Mail sent using Custom Connector",
    "body": {
      "contentType": "Text",
      "content": "This is a sample email sent using Custom Connector which uses Service prinicipal"
    },
    "toRecipients": [
      {
        "emailAddress": {
          "address": "mailboxaddress@domain.com"
        }
      }
    ]
  }
}

Proceed to the Definition tab of the Custom Connector. Here, select + New action, which will generate the following screen for you to enter information about the action.

After the Summary, Description, and Operation ID are entered, click + Import from sample under the Request section to enter the Graph API endpoint request details as shown below.

Click Import on the screen above. You can optionally provide a sample response by entering details in the default response section in the Add Action interface which will help you identify objects in Power Apps if the request has a response. For more information, please refer to my earlier blog post, which I have referenced in the introduction section. Don’t forget to update the connector.

Create Connection:

Once the connector with the Send Email action is set up, you can proceed to test the action for sending emails. The first step is to create the connection: navigate to the interface below, click + New connection under the Test section, and then on the following popup select the authentication type Service Principal Connection.

Enter the Client ID, Secret, and the Tenant ID you copied earlier to create the connection. You would now be able to test the action.

To use this in the Power Apps, after adding the connector, you would be able to call the action using the below code:

ServicePrinicpalSupport.SendEmail("fromEmailAddress@domain.com", {
        'message': {
            'subject': "Mail sent using Custom Connector from Power Apps",
            'body': {
                'contentType': "Text",
                'content': "Sample email sent from Custom Connector leveraging Service Principal"
            },
            'toRecipients': [
                {
                    'emailAddress': {
                        'address': "toUseraddrees@domain.com"
                    }
                }
            ]
        }
    });

Connections created this way use the authentication type Explicit Authentication.

https://learn.microsoft.com/en-us/power-platform/admin/security/connect-data-sources#authenticating-to-data-sources

Sharing Connector:

When the app is shared with a user, the user will not be prompted to create a connection; instead, the consent window below will appear to allow the connection. You can use the PowerShell command Set-AdminPowerAppApisToBypassConsent if you want to bypass consent for the app users. The connection is shareable, allowing you to share it with other users for editing, using, sharing, and so on.

Authentication Flow:

The authentication flow for custom connectors enabled with Service Principal uses the OAuth 2.0 client credentials flow, while for the custom connectors without Service Principal authentication, the OAuth 2.0 Authorization code flow is used. Below, you’ll find the Swagger details for the custom connector, showing both scenarios for connecting to Microsoft Graph using OAuth2 with Azure Active Directory

Swagger definition for Service Principal Authentication:

securityDefinitions:
  oauth2-auth:
    type: oauth2
    flow: accessCode
    tokenUrl: https://login.windows.net/common/oauth2/authorize
    scopes:
      Mail.Send: Mail.Send
    authorizationUrl: https://login.microsoftonline.com/common/oauth2/authorize
  oAuthClientCredentials:
    type: oauth2
    flow: application
    tokenUrl: https://login.windows.net/common/oauth2/authorize
    scopes:
      Mail.Send: Mail.Send
security:
  - oauth2-auth:
      - Mail.Send
  - oAuthClientCredentials:
      - Mail.Send

Swagger definition for Non Service Principal Authentication:

securityDefinitions:
  oauth2-auth:
    type: oauth2
    flow: accessCode
    tokenUrl: https://login.windows.net/common/oauth2/authorize
    scopes:
      Mail.Send: Mail.Send
    authorizationUrl: https://login.microsoftonline.com/common/oauth2/authorize
security:
  - oauth2-auth:
      - Mail.Send

Summary:

In this blog post, I have shown you how to use service principal authentication in a custom connector with application permissions to send an email through the Graph API. You can apply this feature to any supported Microsoft Graph application permission, such as SharePoint, Exchange, Teams, Azure AD, and more. It’s a game-changer, making automated processes smooth and user-free. Hope you have found this informational & thanks for reading. If you are visiting my blog for the first time, please do look at my other blogposts.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

Convert Speech to Text using OpenAI Whisper in Power Apps

OpenAI has released a new neural network called Whisper, which is an open-source model that can convert speech to text with impressive accuracy. This model is specifically designed to transcribe spoken language into text with high precision and speed, making it an ideal tool for a variety of applications, such as virtual assistants and video captioning. Whisper relies on advanced machine learning algorithms to analyze audio signals from multiple languages and convert them into written text. OpenAI has recently made API endpoints available to the public since March 1, 2023, allowing developers to easily integrate this powerful technology into their own applications.

The OpenAI speech to text API can:

  • Transcribe audio into whatever language the audio is in.
  • Translate and transcribe the audio into English.

As of the date I am writing this post, this model is not available in Azure. In this blog post, I will cover how to use the Microphone control and File Upload control to convert speech to text using the OpenAI Whisper API in a Power Automate flow.

Download Link to the Sample App: https://github.com/ashiqf/powerplatform/blob/main/OpenAI-SpeechtoText.msapp. Replace the API Key in the Power Automate flow HTTP Action Authorization Header.

OpenAI Speech to Text API:

The speech to text API provides two endpoints: transcriptions and translations. At present, the maximum file size allowed for uploads is 25 MB, and the supported audio formats are mp3, mp4, mpeg, mpga, m4a, wav, and webm. In this blog post, I used the translations endpoint to demonstrate its capability to convert English audio into text; it can understand other languages as well.

POST https://api.openai.com/v1/audio/translations

If you have not yet created an API key, please sign up/login for OpenAI and obtain it from there.

Integration with Power Apps:

I have used a Power Automate flow with the Power Apps trigger to invoke the Speech to Text API via the HTTP connector in Power Automate. Alternatively, you can achieve the same outcome by constructing a Custom Connector. This sample app can be downloaded from this github link.

Microphone Control:

The Microphone control captures audio input through the device’s microphone, and the recording is sent to the Power Automate flow for conversion into text using the Whisper API. The audio format of the recording depends on the type of device being used:

  • 3gp format for Android.
  • AAC format for iOS.
  • Webm format for web browsers.

I’ve tested this control from the app accessed through the web browser. If you encounter an audio format that OpenAI does not support, you can use utilities such as FFmpeg; additionally, a .NET build of the utility is available for download and can be used in an Azure Function. John Liu (MVP) has written a sample Azure Function that handles the conversion of audio formats using the aforementioned utility.

Step 1: To add a microphone control to the canvas, insert the Microphone control from the command bar. To preview the recorded audio from the Microphone control, add an Audio control

Step 2: Add a button to trigger the Power Automate flow that converts the audio. Find the Power Fx code below:

//Generates a JSON Text with the binary of the Audio file or Recorded audio
Set(varJson,JSON(Microphone1.Audio,JSONFormat.IncludeBinaryData));
Set(strB64Audio, Last(Split(varJson, ",")).Value);
Set(strB64AudioContent, Left(strB64Audio, Len(strB64Audio) - 1));
//Extract Audio Format
Set(varAudioFileType,Mid(varJson,Find(":",varJson)+1,Find(";",varJson)-Find(":",varJson)-1));
//Call the Power Automate Flow
Set(audioText,'SpeechtoText-OpenAIWhisper'.Run(strB64AudioContent,varAudioFileType).audiotext);

The Power Fx code performs the following tasks:

  • Stores the audio captured by a Microphone control in a variable as JSON data, including binary data.
  • Extracts the base64-encoded audio content from the JSON data using the string manipulation functions Split, Left, Mid.
  • Determines the audio file type by parsing a string variable.
  • Uses the extracted audio content and file type to call the Power Automate flow ‘SpeechtoText-OpenAIWhisper’ to obtain the corresponding text transcription; the flow is covered in a later section of this post.
  • Assigns the resulting text transcription to a variable named ‘audioText’, which is set on a Text Label to display the converted text from the OpenAI Whisper API.

Step 3: Add a Label control to display the converted text stored in the variable audioText.

File Upload Control:

As of the day I am writing this post, there is no file control in Power Apps that can handle all types of files, so I have created a custom component utilizing the Attachment control to act as a file attachment control. For further details on adding the control to the app, please refer to the blog post Uploading Files Made Easy: A Guide to Using the Attachment Control in Power Apps.

Step 1: Add the file attachment control to the app from the component library. Set the input property for Maximum Attachments to 1 from the component.

Step 2: To extract the binary content of an audio file, add an Image control to the app. The Image control is capable of working with any type of file to extract its content.

Step 3: Add a Button control to convert the audio from the uploaded file. Find the Power Fx code below:

//Generates a JSON Text with the binary of the Audio file using the Image control
Set(varFileContent,JSON(Image1.Image,JSONFormat.IncludeBinaryData));
//Extract Base64 content
Set(varExtractedFileContent,Last(Split(varFileContent,",")).Value);
//Remove the last character " from the string
Set(varExtractedFileContent,Left(varExtractedFileContent,Len(varExtractedFileContent)-1));
//Extract Audio Format
Set(varAudioFileType,Mid(varFileContent,Find(":",varFileContent)+1,Find(";",varFileContent)-Find(":",varFileContent)-1));
//Call the Power Automate Flow
Set(audioText,'SpeechtoText-OpenAIWhisper'.Run(varExtractedFileContent,varAudioFileType).audiotext);

Step 4: Add a Label control to display the converted text stored in the variable audioText.

Power Automate Flow:

Now, let’s create a Power Automate flow with the trigger type Power Apps to invoke the OpenAI Whisper API and convert speech to text.

Step 1: Add two Compose actions (input parameters) to receive the audio format and content from either the recorded audio captured by the Microphone control or the uploaded audio file from the file attachment control in Power Apps.

{
  "$content-type": @{outputs('Compose-AudioFormat')},
  "$content": @{triggerBody()['Compose-FileContent_Inputs']}
}

Step 2: Add an HTTP action to make a request to the Whisper API endpoint. Refer to the blog post How to use form-data and form-urlencoded content type in Power Automate or Logic Apps HTTP action for handling multipart/form-data in the HTTP action.

Request Body:

{
  "$content-type": "multipart/form-data",
  "$multipart": [
    {
      "headers": {
        "Content-Disposition": "form-data; name=\"model\""
      },
      "body": "whisper-1"
    },
    {
      "headers": {
        "Content-Disposition": "form-data; name=\"file\";filename=\"audiofile.webm\""
      },
      "body": @{outputs('Compose-FileContent')}
    }
  ]
}

Step 3: Add the Respond to a PowerApp or a flow action to pass the converted text back to the app. To get the converted text, use the following expression

body('HTTP-CallaOpenApiModel')['Text']

The expression was constructed based on the response of the Whisper API call. In the event that the response property changes in the future, please ensure to update the expression accordingly.
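
For context, the translations endpoint returns a small JSON object along these lines (an illustrative example, not a captured API response):

{
  "text": "This is the transcribed text returned by the Whisper API."
}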

Summary:

In this post, I’ve outlined a step-by-step guide on how to develop a basic app with Speech to Text functionality using Power Apps and a Power Automate flow leveraging the OpenAI’s Whisper API. The possibilities for using this technology are endless, from creating virtual assistants to generating audio captions and translations. Furthermore, the Whisper API can also be used to transcribe video files, adding even more versatility to its capabilities. It’s worth noting that while Azure offers its own Speech to Text service, it currently does not rely on the OpenAI Whisper Model. However, it’s possible that the two services will eventually integrate in the future. Hope you have found this informational & thanks for reading. If you are visiting my blog for the first time, please do look at my other blogposts.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

How to copy an existing DLP Policy in Power Platform

DLP policies are essential in ensuring that data is managed uniformly across an organization, thereby preventing critical business data from being accidentally published to social media or other connectors. These policies can be created at both the tenant and environment levels, with management handled through the Power Platform admin center. However, it is currently not possible to copy an existing DLP policy from the Admin center. This limitation can create difficulties when there is a need to create new policies based on an existing one.

In this blog post, we will explore various options for copying existing DLP policies to streamline the process. By using these options, you can save time and effort when creating new policies based on existing ones.

  • Power Automate Flow
  • DLP Editor Power Apps from CoE starter kit app
  • Power Shell

Note: To create a DLP policy at the tenant level, you must have the Power Platform Administrator or Global Administrator role in Microsoft Entra ID (Azure AD).

Power Automate Flow:

The Power Platform Connector for Admins, available in both Power Automate and Power Apps, offers a range of environment lifecycle management capabilities, including DLP policy management.

To copy an existing DLP policy, we will utilize the actions List DLP Policies and Create DLP Policy in a button flow.

Step 1: In the trigger, create two input parameters for the existing policy name and the new DLP policy name, followed by the action List DLP Policies from the Power Platform for Admins connector to list all the policies in the organization.

Step 2: To select the DLP policy that you want to copy in the Power Automate flow, add a Filter Array action. This action filters the DLP policies obtained from the List DLP Policies action based on a condition. Specifically, it checks whether the displayName of each DLP policy from the List DLP Policies action matches the trigger input Existing DLP Policy Name. Once the Filter Array action is executed, it returns a new array containing only the DLP policy that meets the condition. This filtered array can then be used as input for creating the new DLP policy, as sketched below.
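
A rough sketch of the Filter Array configuration (assuming the list action is named List DLP Policies, its policies are returned in a value array, and the trigger’s first text input is named text):

From: @{body('List_DLP_Policies')?['value']}
Condition (advanced mode): @equals(item()?['displayName'], triggerBody()?['text'])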

Step 3: Add the action Create DLP Policy from the Power Platform for Admins connector, with the first property, Display Name, taken from the trigger input. For the other input parameters of the action, use expressions based on the output of the Filter Array action, as shown below:

body('Filter_array')[0]['defaultConnectorsClassification']
body('Filter_array')[0]['connectorGroups']
body('Filter_array')[0]['environmentType']
body('Filter_array')[0]['environments']

Save the flow, and then test it to make sure that it works as intended. I have the flow definition saved in my GitHub repo if you want to take a copy of it.

CoE Starter Kit App:

The Center of Excellence (CoE) starter kit core components solution includes a Canvas app DLP Editor with a range of useful features to manage and administer DLP policies. One such feature is the ability to copy an existing Data Loss Prevention (DLP) policy, making it easy to replicate policies across multiple environments or tenants.

This app uses the Power Platform for Admins connector.

Power Shell:

Power Apps Administration PowerShell provides a convenient set of cmdlets that enable you to easily create and manage Data Loss Prevention (DLP) Policies. Microsoft has provided a helpful sample script that allows you to manage your tenant and environment policies. With this script, you can perform a wide range of tasks related to DLP policies, including creating new policies, reading existing policies, updating policies, and removing policies. The sample can be found here. By breaking down the sample script into manageable sections, you can gain a deeper understanding of how DLP policies work and how you can modify them to suit your organization’s needs with PowerShell.
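
As a minimal illustration using the Microsoft.PowerApps.Administration.PowerShell module (a sketch; see the linked sample for the full copy scenario):

# Sign in with an account that has the Power Platform admin role
Add-PowerAppsAccount

# List the existing DLP policies in the tenant – the starting point before copying one
Get-DlpPolicy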

Summary:

This blog post provides an overview of different methods that can be used to copy existing Data Loss Prevention (DLP) policies, which is currently not possible from the Power Platform admin center. These techniques can help automate the DLP policy creation process, saving time and effort.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

Uploading Files Made Easy: A Guide to Using the Attachment Control in Power Apps

The Attachment control in Power Apps is a useful feature that allows users to upload and delete files, but it can only be used with data sources such as SharePoint List or Dataverse table. However, if you need to upload and delete files without using these data sources, you can create a custom component using the Attachment control or you can directly use this control in the app. I have followed the tip from Shane Young in this YouTube video to add the Attachment control to a component library.

By creating a custom component library for the attachment control, you can upload and delete files similar to a Picture control, but with the ability to handle any file type across any app within an environment. This blog post is not a tutorial on how to create the component, but rather covers:

  • How to use it
  • How to save the file in a SharePoint Document Library using a Power Automate flow
  • How to customize the component to fit your needs.

How to use it – Add the Component to the Power Apps:

To incorporate this component into your app, you need to first import it into your environment. Please find below the steps to follow

Step 1: Download the component library from my github repo.

Step 2: Create a blank canvas app with a temporary name. On the studio command bar, click the ellipsis > Open, then browse to select the downloaded .msapp package. Save the app and then publish it. You will now be able to see the component in the Component Libraries.

Step 3: After following the instructions outlined in this documentation to import the Published component into your app, the component will be available for use in any app within the environment as shown below.

Step 4: Modify the input parameters of the component to adjust settings such as maximum number of attachments, border colour, attachment size, and other defined parameters of the component.

Step 5: To display the uploaded file content within the app or to send the file to a Power Automate flow, you can incorporate any of the following controls based on the file type:

In the Media Property of the control, the formula to display the file content is

First(FileAttachment_1.Attachments.FileAttachment).Value

The file content will be uploaded to the app as binary data with the URL appres://blobmanager/ for each file uploaded from the attachment control. To get the file Name:

First(FileAttachment_1.Attachments.FileAttachment).Name

Note: In the above screenshot, I have set the Max Attachments Component property to 1 in the Step 4

Send the File to Power Automate:

In order to send or store a file using a Power Automate flow, I needed to convert the file content to Base64 format. To accomplish this, I used an Image control to capture the file content in binary format. Here is how I configured the Image control:

This control works with any type of file to get the binary content.

After obtaining the binary content of the file using the JSON function, I performed some string manipulations to extract the binary content while excluding the content type. Specifically, I used a combination of the Split(), Left() and Last() functions to extract the content into a variable varExtractedFileContent.

Set(varFileContent,JSON(Image2.Image, JSONFormat.IncludeBinaryData));
Set(varExtractedFileContent, Last(Split(varFileContent, ",")).Value);
Set(varExtractedFileContent, Left(varExtractedFileContent, Len(varExtractedFileContent) - 1));

By performing these manipulations, I was able to extract the binary content of the file in a format that could be easily passed to a Power Automate flow or other API or action.

This allowed me to send the file to a Power Automate flow, which could then save the file in a SharePoint library or call some other API or action that required the data to be in Base64 format.

The Power Automate flow used to save the file to a SharePoint Document Library is simple. The flow consists of a Power Apps trigger and a SharePoint action Create File, which takes two input parameters: File Name and File Content.

I have used the base64toBinary() expression to convert the base64-encoded string to binary data. This expression is a prerequisite for the SharePoint create file action and ensures that the file is saved correctly to the SharePoint Document Library.
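
As a rough illustration, the File Content field of the Create file action would contain an expression along these lines (varFileContent is a hypothetical name for the flow’s file content input; use the actual parameter name from your trigger):

base64ToBinary(triggerBody()['varFileContent'])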

PowerFx to call the flow from Power Apps:

ProcessAttachments.Run(First(AttachmentComponent_1.Attachments.FileAttachment).Name,varExtractedFileContent);

If you need to upload multiple files to a library using the Attachment control, you can combine a Gallery control with the Image control, collections, the ForAll function, and the OnAddFile property of the Attachment control. First, create a collection in the OnAddFile property to store the files that are uploaded through the Attachment control. Then, use the Gallery control to load the binary content of the uploaded files into the Image control. Finally, use the ForAll function to iterate through each file in the gallery and call the Power Automate flow on a button click.

Customizing the Component:

The component I’ve created is a simple one for handling file attachments, but it does not have all the properties from the Attachment control. If you need more customization, you can easily modify it to suit your specific needs by adding additional input or output properties.

To add a new property, you can simply edit the component code and include the new property as an input or output parameter.

By customizing the component in this way, you can tailor it to your specific requirements and ensure that it meets all of your file attachment needs.

Summary:

In summary, the Attachment control in Power Apps is a useful feature for uploading and deleting files, but it is limited to certain data sources. To work around this limitation, you can create a custom component using the Attachment control, which allows you to handle any file type and bypass the use of data sources like SharePoint or Dataverse tables. Hope you have found this informational & thanks for reading. If you are visiting my blog for the first time, please do look at my other blogposts.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

Changing Connections in Connection References on a Managed Solution

A connection reference is a component in a solution that holds information about a connector. It can be used by both a canvas app and Power Automate flows. When importing a managed solution into an environment, the user is asked to either select an existing connection or create a new one. However, once a managed solution is imported, the connection reference cannot be edited from within it, as shown below.

The solution to this is to use the Default Solution, which is a special solution that holds all the components within the environment.

Go to the Default Solution as shown below

To change a connection in a connection reference:

  1. Go to Connection references
  2. Select the connection reference you want to edit
  3. Click “Edit” button.

Change the connection and then click Save

This will update the connection reference to use the new user’s connection.

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.

How to send an Adaptive card to a Microsoft Teams Private channel using Power Automate flow

Within Microsoft Teams, private channels create focused spaces for collaboration where only the owners or members of the private channel can access the channel. The Microsoft Teams connector in Power Automate has an action to Post an Adaptive card in a chat or channel, which posts an adaptive card as a flow bot to a specific Teams channel. The following error will appear if this action is used to post the card as a Flow bot in the Private channel

Request to the Bot framework failed with error: ‘{“error”:{“code”:”BotNotInConversationRoster”,”message”:”The bot is not part of the conversation roster.”}}’.

The above action will work if the Post as property in the action is changed to User, but the creator of this connection has to be a member of the private channel. This article shows how you can send an adaptive card to a private channel using incoming webhooks, without being a member of the private channel.

Create the Adaptive Card:

An adaptive card facilitates the exchange of UI content in a unified and consistent manner with a simple JSON without the complexity of customizing HTML or CSS. The adaptive card I have used in this example is created from the designer portal. Find below the JSON card payload

{
    "type": "AdaptiveCard",
    "body": [
        {
            "type": "TextBlock",
            "size": "Medium",
            "weight": "Bolder",
            "text": "Adaptive Card in a Private Channel"
        },
        {
            "type": "TextBlock",
            "text": "Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book",
            "wrap": true,
            "color": "Attention"
        }
    ],
    "actions": [
        {
            "type": "Action.OpenUrl",
            "title": "View",
            "url": "https://ashiqf.com"
        }
    ],
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.4"
}

Create Incoming Webhook on a Private Channel:

Incoming webhooks allow external applications to share content within Microsoft Teams channels; in this case, the cloud flow will be the external application sending an adaptive card message to the private Teams channel. You can add and configure an incoming webhook on a private channel by following the instructions in this link from Microsoft. Copy the incoming webhook URL as mentioned in step 6 of the Microsoft documentation, as shown below.

Cloud Flow to send the Adaptive Card to a Private Teams channel:

The adaptive card JSON and the incoming webhook are configured; let’s now create a flow with an HTTP action to send the adaptive card.

Step 1:

Form the HTTP request body for the HTTP action. Replace the placeholder text below with the JSON payload of the adaptive card.

{
  "type": "message",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.adaptive",
      "contentUrl": null,
      "content": 
	  Replace the ADAPTIVE CARD JSON PAYLOAD from the designer portal
    }
  ]
}

Step 2:

Add the HTTP action to the cloud flow with the following values against each parameter

Method: POST

URL: Incoming Webhook URL

Body: from Step 1

Find below the adaptive card in the Private channel

Do you like this article?

Subscribe to my blog with your email address using the widget on the right side or on the bottom of this page to have new articles sent directly to your inbox the moment I publish them.