โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Mapping Element of List into string element in Azure AI Search

I am using Azure AI Search and want to map the list output from my custom skill to an index element after having used the AzureOpenAI Skill.

For example, the output from my custom skill, which chunks my text, looks like this:

{
  "values": [
    {
      "recordId": "0",
      "data": {
        "identifier": [ [1, 2], [1, 2] ],
        "text": ["Hello", "world"],
        "pagenumber": [1, 2]
      }
    }
  ]
}

How can I map the values from the data into scalars for my index? The output for the example above should look like this (the embedding field comes from the AzureOpenAI skill):

[
  {
    "id": "0",
    "identifier": [1, 2],
    "text": "Hello",
    "embedding": [0.2, 0.3],
    "pagenumber": 1
  },
  {
    "id": "1",
    "identifier": [1, 2],
    "text": "world",
    "embedding": [0.3, 0.3],
    "pagenumber": 2
  }
]

I tried out the indexProjections in the Skillset, but I cannot map the list into scalars.
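For what it's worth, index projections iterate over a single source context; as far as I can tell they cannot zip parallel arrays like text and pagenumber. A hedged sketch, assuming the custom skill is reshaped to emit one array of chunk objects (a chunks output whose elements each carry text, pagenumber, and identifier; my-index and parent_id are placeholder names):

"indexProjections": {
  "selectors": [
    {
      "targetIndexName": "my-index",
      "parentKeyFieldName": "parent_id",
      "sourceContext": "/document/chunks/*",
      "mappings": [
        { "name": "text", "source": "/document/chunks/*/text" },
        { "name": "pagenumber", "source": "/document/chunks/*/pagenumber" },
        { "name": "identifier", "source": "/document/chunks/*/identifier" }
      ]
    }
  ]
}

Each element under /document/chunks/* then becomes its own search document, which gives the scalar-per-document shape shown above.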

Azure yaml pipeline using "extends"

I am trying to use extends as part of my pipeline, starting with the first basic example from the Azure docs, i.e.:

    # pythonparameter-template.yml
    parameters:
    - name: usersteps
      type: stepList
      default: []
    steps:
    - ${{ each step in parameters.usersteps }}
      - ${{ step }}

# azure-pipelines.yml
trigger: none

resources:
  repositories:
  - repository: CI-CD-Templates
    type: git
    name: /CI-CD-Templates
    ref: refs/heads/master

extends:
  template: /pythonparameter-template.yml@CI-CD-Templates
  parameters:
    usersteps:
    - script: echo This is my first step
    - script: echo This is my second step

I keep getting the below error:

The directive 'each' is not allowed in this context. Directives are not supported for expressions that are embedded within a string. Directives are only supported when the entire value is an expression Unexpected value '${{ each step in parameters.usersteps }} - ${{ step }}'
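For comparison, the documented stepList pattern ends the each line with a colon and nests the expanded step one level deeper; without the colon, YAML folds both expressions into a single string, which matches the error text. A sketch of the template with that change:

    # pythonparameter-template.yml
    parameters:
    - name: usersteps
      type: stepList
      default: []
    steps:
    - ${{ each step in parameters.usersteps }}:
      - ${{ step }}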

Also, after I extend from a template, can azure-pipelines.yml also have its own custom steps? I.e.:

# azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: git
    name: MyProject/MyTemplates
    ref: tags/v1

extends:
  template: template.yml@templates
  parameters:
    usersteps:
    - script: echo This is my first step
    - script: echo This is my second step
steps:
- template: /validation-template.yml@CI-CD-Templates
  parameters:
    commitMessage: $(commitMessage)

- template: /shared-template.yml@CI-CD-Templates
  parameters:
    buildArtifactDir: $(buildArtifactDir)
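On the second part: to my knowledge, a pipeline that uses extends cannot also define root-level steps, because the extended template must be the sole skeleton of the pipeline. The usual route is to pass the extra templates through the stepList parameter instead, roughly like this (a sketch, assuming the extended template inserts usersteps where they should run):

# azure-pipelines.yml
extends:
  template: template.yml@templates
  parameters:
    usersteps:
    - script: echo This is my first step
    - template: /validation-template.yml@CI-CD-Templates
      parameters:
        commitMessage: $(commitMessage)
    - template: /shared-template.yml@CI-CD-Templates
      parameters:
        buildArtifactDir: $(buildArtifactDir)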

Unable to create CosmosDB database via go sdk

From macOS, using the Azure Cosmos DB emulator running in a local Docker container, I am able to use the explorer web portal to create a database. However, I am unable to create a database with the Azure Go SDK, and no error surfaces to indicate that creating the database failed.

Besides this, there are multiple SDKs and a lot of the documentation has mistakes in it. Is there a canonical source where I can see a working Go example that uses the Cosmos DB emulator?


cred, err := azcosmos.NewKeyCredential(cosmosDbKey)
handle(err)
client, err := azcosmos.NewClientWithKey(cosmosDbEndpoint, cred, nil)
handle(err)

databaseProperties := azcosmos.DatabaseProperties{ID: "databaseName"}
databaseResponse, err := client.CreateDatabase(context.Background(), databaseProperties, nil)

How can I get better visibility into what is going on here? The client can be constructed even if I pass in empty strings instead of the proper key and endpoint.

I tried to use the Go SDK to create a database and was expecting it to appear in the emulator portal. I would also expect NewClientWithKey() to fail when the credentials are invalid, but this is not the case.
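A sketch of the same flow with the errors actually inspected; note the snippet above never checks the err (or databaseResponse) returned by CreateDatabase, which would hide a failure. NewKeyCredential and NewClientWithKey only validate their inputs locally and make no network call, so invalid credentials are expected to surface on the first request:

package main

import (
	"context"
	"errors"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

func main() {
	cosmosDbEndpoint := "https://localhost:8081/" // emulator default
	cosmosDbKey := "<emulator key>"

	cred, err := azcosmos.NewKeyCredential(cosmosDbKey)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azcosmos.NewClientWithKey(cosmosDbEndpoint, cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// The first request is where bad credentials or an unreachable
	// endpoint actually show up, so this error must be checked.
	resp, err := client.CreateDatabase(context.Background(), azcosmos.DatabaseProperties{ID: "databaseName"}, nil)
	if err != nil {
		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) {
			log.Fatalf("create failed with HTTP %d: %s", respErr.StatusCode, respErr.ErrorCode)
		}
		log.Fatal(err)
	}
	log.Printf("created database, activity id %s", resp.ActivityID)
}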

Azure Virtual Network Encryption

What is Azure Virtual Network Encryption? Azure Virtual Network encryption provides a layer of security by encrypting virtual network traffic, specifically between Azure Virtual Machines that communicate within a subnet or across different subnets.

Save log files and exports from Azure function to Azure storage

I would like to save the log file, JSON file, and TXT file generated by my Python Function App script to Azure Storage. My current code looks like this:

import logging

logging.basicConfig(
    format='%(asctime)s %(levelname)-8s %(message)s',
    level=logging.INFO,
    filename='Log.txt',
    datefmt='%Y-%m-%d %H:%M:%S')

def main(mytimer: func.TimerRequest, outputblob: func.Out[bytes]) -> None:
    # code with logging

function.json file:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "mytimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 20 * * *"
    },
    {
      "name": "outputblob",
      "type": "blob",
      "dataType": "binary",
      "path": "test/log.txt",
      "connection": "MyStorageConnectionAppSetting",
      "direction": "out"
    }
  ]
}

Is "path" the path to my storage account? And is "connection" just a name, or does it reference anything?

I would like to save the log, JSON, and TXT files generated by the Azure Function to Azure Storage.
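As far as I know, path is the container/blob path inside the storage account ("test/log.txt" means blob log.txt in container test), and connection names an app setting that holds the storage connection string, rather than being the string itself. A minimal sketch of pushing collected content through the binding (the payload line is a placeholder):

import azure.functions as func

def main(mytimer: func.TimerRequest, outputblob: func.Out[bytes]) -> None:
    # Build the file content in memory (log text, JSON, etc.)...
    payload = "2024-04-09 20:00:00 INFO     run finished\n"
    # ...and hand it to the output binding, which writes it to test/log.txt
    # in the account referenced by the MyStorageConnectionAppSetting setting.
    outputblob.set(payload.encode("utf-8"))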

Update records in a table using azure function time trigger in nodejs

I want to update the records in a table. The query is like:

UPDATE TutorAds SET Status = 'expired', Published_At = NULL WHERE Published_At < DATEADD(DAY, -7, GETDATE())

But I could not find a good example of updating records after one day.

I am following this url https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-azure-sql-output?tabs=isolated-process%2Cnodejs-v4&pivots=programming-language-javascript#http-trigger-write-records-to-table-javascript

Previously I tried to do it like this, but it only updates the table records when I run the function locally. After deployment, it does not change the DB.

Here is my function code in VS Code:

const { app } = require('@azure/functions');
const { Connection, Request, TYPES } = require('tedious'); // TYPES is needed for addParameter below
require('dotenv').config();

app.timer('newtriggerexpre', {
    schedule: '0 */2 * * * *',
    handler: (myTimer, context) => {
        const config = {
            "server": process.env.SQL_SERVER,
            "authentication": {
                "type": "default",
                "options": {
                    "userName": process.env.SQL_USERNAME,
                    "password": process.env.SQL_PASSWORD
                }
            },
            "options": {
                "port": 1433,
                "database": "Tutoringacademy",
                "trustServerCertificate": true
            }
        }

        const connection = new Connection(config);

        // Connect to SQL Server
        connection.on('connect', async function (err) {
            if (err) {
                context.error(err); // v4 programming model: context.error(), not context.log.error()
                return;
            }

            try {
                // Calculate the date 7 days ago
                const sevenDaysAgo = new Date();
                sevenDaysAgo.setDate(sevenDaysAgo.getDate() - 7);

                // Execute update query for records older than 7 days
                const request = new Request(`
                UPDATE TutorAds
                SET Status = 'expired', Published_At = NULL
                WHERE Published_At < @sevenDaysAgo
                `, function (err, rowCount) {
                    if (err) {
                        context.error(err);
                        return;
                    }

                    context.log(`${rowCount} rows were updated`);
                });

                request.addParameter('sevenDaysAgo', TYPES.DateTime, sevenDaysAgo);

                connection.execSql(request);
            } catch (err) {
                context.error(err);
            }
        });

        // tedious does not connect on its own: without this call the
        // 'connect' event never fires and the query never runs.
        connection.connect();
    }
});
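One likely reason for the local-versus-deployed difference (an assumption, since no deployment logs are shown): require('dotenv').config() only reads a local .env file, so SQL_SERVER, SQL_USERNAME, and SQL_PASSWORD are undefined in Azure unless they are also configured as application settings on the Function App, for example:

az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings SQL_SERVER=<server>.database.windows.net SQL_USERNAME=<user> SQL_PASSWORD=<password>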


Download multiple azure blobs asynchronously

I am trying to improve the speed of downloading blobs from Azure. Using the Azure package examples, I have created my own example; however, it only works for a single file. I want to be able to pass in multiple customers in the form of a customer_id_list (commented out below) so that the files are downloaded at the same time, but I am unsure how to scale the aiohttp-based code to achieve this.

import asyncio
from azure.storage.blob.aio import BlobServiceClient

async def download_blob_to_file(blob_service_client: BlobServiceClient, container_name, transaction_date, customer_id):
    blob_client = blob_service_client.get_blob_client(container=container_name, blob=f"{transaction_date}/{customer_id}.csv")
    with open(file=f'{customer_id}.csv', mode="wb") as sample_blob:
        download_stream = await blob_client.download_blob()
        data = await download_stream.readall()
        sample_blob.write(data)


async def main(transaction_date, customer_id):
    connect_str = "connection-string"
    blob_serv_client = BlobServiceClient.from_connection_string(connect_str)

    async with blob_serv_client as blob_service_client:
        await download_blob_to_file(blob_service_client, "sample-container", transaction_date, customer_id)

if __name__ == '__main__':
    transaction_date = '20240409'
    customer_id = '001'
    # customer_id_list = ['001', '002', '003', '004']
    asyncio.run(main(transaction_date, customer_id))
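A sketch of the fan-out, reusing the download_blob_to_file coroutine above: build one task per customer and run them concurrently with asyncio.gather (the async SDK drives aiohttp internally, so nothing else needs to change):

import asyncio
from azure.storage.blob.aio import BlobServiceClient

async def main(transaction_date, customer_id_list):
    connect_str = "connection-string"
    blob_serv_client = BlobServiceClient.from_connection_string(connect_str)

    async with blob_serv_client as blob_service_client:
        # One coroutine per customer; gather awaits them concurrently.
        await asyncio.gather(*(
            download_blob_to_file(blob_service_client, "sample-container", transaction_date, cid)
            for cid in customer_id_list
        ))

if __name__ == '__main__':
    transaction_date = '20240409'
    customer_id_list = ['001', '002', '003', '004']
    asyncio.run(main(transaction_date, customer_id_list))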

Output Azure Cosmos DB for PostgreSQL connection string with Terraform

I am currently trying to deploy an Azure Cosmos DB PostgreSQL cluster with Terraform.

resource "azurerm_cosmosdb_postgresql_cluster" "example" {
  name                            = "example-cluster"
  resource_group_name             = var.resource_group_name
  location                        = var.resource_group_location
  administrator_login_password    = ""
  coordinator_storage_quota_in_mb = 131072
  coordinator_vcore_count         = 2
  node_count                      = 0
}

It works, but I wanted to know if there is a way to output the connection string of the databases at the end of the deployment, so I can use an SQL init script to create all the schemas and tables needed?

I'm pretty new to Terraform; I have read the "Attributes Reference" sections of the documentation, but I'm not sure I'm looking in the right place.

I also already read this topic, but it does not work since the "connection_strings" field does not exist in my case.

Thanks for your help!
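One hedged possibility, to verify against the azurerm provider version in use: the azurerm_cosmosdb_postgresql_cluster resource appears to export a servers attribute carrying each server's fqdn, and Cosmos DB for PostgreSQL clusters use the fixed citus admin role and citus database, so a connection string can be assembled in an output. The password variable below is a stand-in for however the secret is managed:

output "coordinator_connection_string" {
  # servers[0] is assumed to be the coordinator here.
  value     = "postgresql://citus:${var.administrator_login_password}@${azurerm_cosmosdb_postgresql_cluster.example.servers[0].fqdn}:5432/citus?sslmode=require"
  sensitive = true
}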

Problem with automate refresh Azure App Configuration with Spring Boot and enabled global method security

I have a Spring Boot (2.6.6) application connected to Azure App Configuration. I also use its automated refresh mechanism, which works fine: I change a value in Azure App Configuration and update the sentinel value, and the application detects the change and updates the bean values (properties) by calling the setter methods, without restarting the application.

But after I added a class with @EnableGlobalMethodSecurity(prePostEnabled = true), the refresh mechanism stopped working. The values are only set during startup, then never again. I can see in the log that the application detects the changes, but it never calls the bean's setters to update the values.

How can I solve this so that the automated refresh mechanism and @PreAuthorize work together?

MethodSecurityConfiguration.class:

@Configuration
@AllArgsConstructor
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class MethodSecurityConfiguration extends GlobalMethodSecurityConfiguration {
...
}



ConfigurationProperties.class:

public interface AppConfigurationProperties {

    void setExample(String example);

    String getExample();
}



AzureAppConfigurationProperties.class:

@Configuration
@ConfigurationProperties(prefix = "config")
public class AzureAppConfigurationProperties implements AppConfigurationProperties {

    private String example;

    @Override
    public void setExample(String example) {
        this.example = example;
    }

    @Override
    public String getExample() {
        return this.example;
    }
}

Error: Resource type '/microsoft.operationalinsights/workspace' is invalid for property 'properties.workspaceId'

I'm trying to create a function app and update its diagnostic settings through a PowerShell script, pointing at an existing Log Analytics workspace that resides in a different subscription. However, the update throws the error below:

ERROR: Resource type '/microsoft.operationalinsights/workspace' is invalid for property'properties.workspaceId'. Expected types are 'microsoft.operationalinsights/workspaces.

It has something to do with the workspace ID, as I noticed that hardcoding the workspace ID successfully retrieves and updates the diagnostic settings.
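For visibility into what is actually being sent (an assumption about the failure mode: the values arriving from the workflow may be empty or malformed), the workspace resource ID can be built once and printed before use:

$workspaceResourceId = "/subscriptions/$lawSubscriptionId/resourceGroups/$laWorkspaceRGName/providers/Microsoft.OperationalInsights/workspaces/$laWorkspaceName"
Write-Host "Workspace resource id: $workspaceResourceId"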

PowerShell Script:

[CmdLetBinding()]
param(
    [string] $SubscriptionId,
    [string] $BusinessFunAppRGName,
    [string] $BusinessFunAppName,
    [string] $laWorkspaceRGName,
    [string] $laWorkspaceName,
    [string] $lawSubscriptionId
)

$ResourceId = "/subscriptions/$SubscriptionId/resourceGroups/$BusinessFunAppRGName/providers/Microsoft.Web/sites/$BusinessFunAppName"

$diagSettings = Get-AzDiagnosticSetting -ResourceId $ResourceId

if ($diagSettings) {
    Write-Host "Diagnostics Settings Found"
} else {
    Write-Host "Diagnostics Settings Not Found, Applying settings..."

    $metric = @()
    $log = @()
    $metric += New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category AllMetrics
    $log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category FunctionAppLogs

    New-AzDiagnosticSetting -Name "DiagnosticSettings" `
        -ResourceId $ResourceId `
        -Log $log -Metric $metric `
        -WorkspaceId "/subscriptions/$lawSubscriptionId/resourceGroups/$laWorkspaceRGName/providers/microsoft.operationalinsights/workspaces/$laWorkspaceName"

    Write-Host "Done"
}

Workflow file sample:

- name: 'Update diagnostics settings'
  uses: azure/powershell@v1
  with:
    azPSVersion: "latest"
    inlineScript: |
      arm-templates/business-subscription-afna/update-diagnostics.ps1 `
        -SubscriptionId ${{ github.event.inputs.businessSubscriptionId }} `
        -BusinessFunAppRGName ${env:parameters.resourceGroup.value} `
        -BusinessFunAppName ${env:parameters.funcAppName.value} `
        -laWorkspaceRGName ${env:parameters.mgmtlaWorkspaceRGName.value} `
        -laWorkspaceName ${env:parameters.mgmtlaWorkspaceName.value} `
        -lawSubscriptionId ${env:parameters.mgmtlawSubscriptionId.value}
    failOnStandardError: $true

Azure pipeline set npm version variable and use in Git tag

I'm trying to get the npm version from package.json and use it to tag my Git repo, but it's not working because the variable seems to be "preloaded" before it is set.

I've created a command-line task that just does this:

npmVersion=$(node -p "require('./package.json').version")
# double quotes so $npmVersion expands, and no space after "variable="
echo "##vso[task.setvariable variable=npmVersion]$npmVersion"

Then, in the Get sources, I've set my repo and branch and tag format like this:

[screenshot: the tag format configured under Get sources]

But the output tag looks like this:

[screenshot: the resulting tag]

How could I do it?

Thanks
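A hedged workaround sketch: the tag format under Get sources is evaluated when sources are checked out, before any task has run, so a variable set during the job cannot feed it. Tagging from a script step after reading the version avoids the ordering problem (this assumes the job's identity is allowed to push to the repository):

npmVersion=$(node -p "require('./package.json').version")
git tag "v$npmVersion"
git push origin "v$npmVersion"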

Azure Communication Service for video call - function to know if incomingCall is out of time

I'm using Azure Communication Service to create video calls between 2 people (person 1 and person 2).

Person 1 logs in and waits for person 2 to log in. When person 2 connects, person 1 is notified through the "incomingCall" event:

callAgent.on('incomingCall', async (args) => {
    console.log("-------- ON incomingCall ------------");
    try {
            incomingCall = args.incomingCall;
            acceptCallPop.style.display = "block";
            ring();
    } catch (error) {
            console.error("incomingCall ERROR");
            console.error(error);
    }
});

Person 1 can then accept the call with the "acceptCall" button. But if person 1 takes too long to accept (around 30 s), the call is destroyed, and clicking "acceptCall" then raises an error.

I managed to hook up a handler for person 2, who can therefore tell that the call is destroyed, but I cannot find how to make person 1 aware of it.

How can person 1 know, in real time, that the call is destroyed?
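A hedged sketch of what may work here (the event name should be verified against the SDK version in use): the IncomingCall object itself emits a 'callEnded' event, so subscribing when the call arrives lets person 1 dismiss the popup when the unanswered call is destroyed:

callAgent.on('incomingCall', async (args) => {
    incomingCall = args.incomingCall;

    // Fires when the incoming call is terminated, e.g. after the ~30 s timeout.
    incomingCall.on('callEnded', (endedArgs) => {
        console.log("incoming call ended:", endedArgs.callEndReason);
        acceptCallPop.style.display = "none"; // hide the accept popup
    });

    acceptCallPop.style.display = "block";
    ring();
});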

How can I delete an Azure SQL server firewall rule using az cli?

I am trying to delete a rule in the firewall settings of an Azure SQL server, but it gives an error.

I am using the following:

az sql server firewall delete --firewall-rule-name $ruleName --server-name $serverName -g    $resourceGroupName > $null

But it doesn't delete the rule; it says --firewall-rule-name is misspelled.

How can I delete a rule using the az CLI? I want to delete all rules in one go.
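A sketch of what should work, as far as I know: the command group is az sql server firewall-rule, and the rule name flag is --name/-n. Deleting every rule can then be done by listing the names and looping:

az sql server firewall-rule delete --name "$ruleName" --server "$serverName" --resource-group "$resourceGroupName"

# delete all rules in one go
for rule in $(az sql server firewall-rule list --server "$serverName" --resource-group "$resourceGroupName" --query "[].name" -o tsv); do
    az sql server firewall-rule delete --name "$rule" --server "$serverName" --resource-group "$resourceGroupName"
done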

Azure Bicep - Configure additional Defender for Cloud settings - [Containers]

I am currently trying to set various Defender for Cloud settings through templates in Bicep. So far I've been able to configure a lot of settings as resources in Bicep.

For instance, enabling Agentless Scanning for Servers:

resource serversPlan 'Microsoft.Security/pricings@2022-03-01' = {
  name: 'VirtualMachines'
  dependsOn: [
    cspmPlan
  ]
  properties: {
    pricingTier: 'Standard'
    subPlan: 'P2'
  }
}

resource agentlessScanningServers 'Microsoft.Security/VmScanners@2022-03-01-preview' = {
  name: 'default'
  dependsOn: [
      serversPlan
  ]
  properties: {
    scanningMode: 'Default'
    // You can add exclusion tags to Agentless scanning for machines feature
    exclusionTags: {}
    enabled: true
  }
}

But I am having a hard time finding any documentation on enabling/disabling Defender for Cloud settings for Containers. More specifically:

  • Agentless discovery for Kubernetes
  • Agentless container vulnerability assessment

Has anyone been doing any configuration for specific settings in DfC for Containers in Bicep? And is it even possible?

I looked through the entirety of https://learn.microsoft.com/en-us/azure/templates/ under the Security sections, but was unable to find anything regarding containers.
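A hedged sketch, not verified against the current schema: newer Microsoft.Security/pricings API versions (2023-01-01 and later) appear to expose an extensions array on the Containers plan, and the two features are assumed to map to the extension names below:

resource containersPlan 'Microsoft.Security/pricings@2023-01-01' = {
  name: 'Containers'
  properties: {
    pricingTier: 'Standard'
    extensions: [
      {
        // Assumed name for "Agentless discovery for Kubernetes"
        name: 'AgentlessDiscoveryForKubernetes'
        isEnabled: 'True'
      }
      {
        // Assumed name for "Agentless container vulnerability assessment"
        name: 'ContainerRegistriesVulnerabilityAssessments'
        isEnabled: 'True'
      }
    ]
  }
}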

Azure API Management Policy: Unable to add condition to check if the string is empty

I'm adding policies for APIs. I have many APIs onboarded and want to add API-specific policies. I add a rate limit when the API owner requests one in a config file; some APIs provide a rate limit and some do not. The APIs also use different authentication (JWT or subscription_id). I want a condition in the policy: if a rate limit is given by the API owner and the API is validated by JWT/subscription_id, apply the given rate limit; otherwise do nothing. I tried the code below.

<policies>
    <inbound>
        <set-variable name="isjwt" value="@(context.Request.Headers.GetValueOrDefault('Authorization', '').AsJwt()?.Subject)" />
        <set-variable name="isSubId" value="@(context.Subscription.Id)" />

        <choose>
            <when condition="(${var.api.rate_limit} != "" and (context.Variables["isjwt"] != null or context.Variables["isSubId"] != null))">
                <rate-limit-by-key calls=${var.api.rate_limit} renewal-period="60" counter-key="@(context.Variables["isjwt"] ?? context.Variables["isSubId"])" />
            </when>
        </choose>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

variable "api" {
    type = object({
    name = string
    rate_limit = string
    })
}

The Terraform plan output looks like this:

      +         <set-variable name="isAuthenticated" value="@(context.Request.Headers.GetValueOrDefault('Authorization', '').AsJwt()?.Subject)" />
  +         <set-variable name="hasSubscriptionId" value="@(context.Subscription.Id)" />
  + 
  +         <choose>
  +          <when condition="( != ""  and (context.Variables["isjwt"] != null or context.Variables["isSubId"] != null))">
  +               <rate-limit-by-key calls="" renewal-period="60" counter-key="@(context.Variables["isjwt"] ?? context.Variables["isSubId"])" />
  +           </when>
  +         </choose>

I get an error at the condition var.api.rate_limit != "". I tried adding single quotes; it doesn't help.

Error: creating/updating Api Policy: (Policy Name "xml" / Api Name "APINAME" / Service Name "SERVICENAME" / Resource Group "RGNAME"): apimanagement.APIPolicyClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="ValidationError" Message="One or more fields contain incorrect values:" Details=[{"code":"ValidationError","message":"Name cannot begin with the '"' character, hexadecimal value 0x22. Line 36, position 52.","target":"representation"}]
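A sketch of one way around both problems, under two assumptions: the policy XML is rendered with Terraform's templatefile() (e.g. templatefile("${path.module}/policy.xml.tftpl", { api = var.api }), with policy.xml.tftpl being a hypothetical file name), and single-quoted XML attributes are used so the double quotes inside the C# expressions don't collide with the XML. Policy expressions are C#, so || replaces or, and since rate_limit is already known when Terraform renders the template, the emptiness check can move out of the policy entirely via a %{ if } directive:

<policies>
    <inbound>
        <set-variable name="isjwt" value='@(context.Request.Headers.GetValueOrDefault("Authorization", "").AsJwt()?.Subject)' />
        <set-variable name="isSubId" value="@(context.Subscription.Id)" />
%{ if api.rate_limit != "" }
        <choose>
            <when condition='@(context.Variables["isjwt"] != null || context.Variables["isSubId"] != null)'>
                <rate-limit-by-key calls="${api.rate_limit}" renewal-period="60" counter-key='@((string)(context.Variables["isjwt"] ?? context.Variables["isSubId"]))' />
            </when>
        </choose>
%{ endif }
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>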

What is the URL for New Page in Azure DevOps Wiki? / How to bookmark New Page?

Is there a URL for New Page in Azure DevOps Wiki?

Specifically one where I can specify the path that it will create the new page in.

I know that if you wanted to bookmark the link to view a specific page you'd usually need to know the page ID, e.g. 22 in this link:

https://dev.azure.com/incredibleit/Incredible%20IT/_wiki/wikis/Incredible-IT.wiki/22/Backups-Restores

However there is a "secret link" you can use to specify the path instead, when the page ID is unknown:

https://dev.azure.com/incredibleit/Incredible%20IT/_wiki/wikis/Incredible-IT.wiki?pagePath=/Home/Technical/Important/Backups%20%26%20Restores

And to edit a page at a specified path it's:

https://dev.azure.com/incredibleit/Incredible%20IT/_wiki/wikis/Incredible-IT.wiki?wikiVersion=GBwikiMaster&_a=edit&pagePath=/Home/Technical/Important/Backups%20%26%20Restores

I tried modifying that last URL from &_a=edit to &_a=new and &a_=add but neither worked.

How can I link to New Page / New Subpage for a specified path?

When I click New Page from within DevOps Wiki, the URL in the address bar does not change.

This is for a larger project that automatically builds Sphinx docs from an Azure DevOps Wiki. I'd like the ability to have a New button in Sphinx. Currently I've managed to add "View on Wiki" and "Edit on Wiki" buttons to each page, along with a few more that are not Wiki-related, and I'm looking to add "New page" to them... but essentially my question boils down to: what is the URL for New page at a specified path?

Here's what each page looks like in Sphinx: [screenshot]

I checked the event listeners on the New Page button in DevOps Wiki, but from what I saw it links to an empty function called we().

If you're interested: I'm using to auto-build Sphinx from the Wiki each time an edit is made, and the HTML of each page gets customised by some scripts I've written, largely using .

โŒ
โŒ