
How can I get HTTP requests to my local machine?

I have a cheap web hosting server (Google). Users fill out a form, and the form data is then emailed to me.

In my house, I have a second computer that uses an IMAP IDLE connection to get alerted to these emails. They are then processed by custom software on this second computer. I cannot perform the processing on Google's server because it isn't powerful enough, and my tasks cannot be done on a headless OS.

I just think this is kind of stupid.

I would've thought I could use some kind of HTTP request system, but I can't figure it out. I would prefer not to open up my local network directly to the outside world. Is there not some way to stream the requests from Google's server to my home PC without using email? The current setup actually does work; I just feel like there should be a more elegant way than email.
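For reference, the listening half of this setup can be quite small. Below is a minimal sketch, assuming the third-party imapclient Python package; the host, credentials, and process_message() handler are hypothetical placeholders.

# Minimal IMAP IDLE listener sketch (assumes: pip install imapclient).
# Host, credentials, and process_message() are hypothetical placeholders.
from imapclient import IMAPClient

def process_message(raw):
    ...  # hand the form data off to the local processing software

client = IMAPClient("imap.example.com", ssl=True)
client.login("user@example.com", "app-password")
client.select_folder("INBOX")

while True:
    client.idle()                               # ask the server to push updates
    responses = client.idle_check(timeout=300)  # block until activity or timeout
    client.idle_done()                          # leave IDLE before other commands
    if responses:
        for uid in client.search("UNSEEN"):
            process_message(client.fetch([uid], ["RFC822"]))

As for avoiding email entirely while keeping the home network closed: one option is to have the home PC poll (or long-poll) an HTTPS endpoint on the hosting server for new submissions, since outbound requests from the house need no inbound firewall holes.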

How to prevent VSCode from automatically wrapping WebdriverIO and Mocha.js code in parentheses?

I am trying to write some tests using WebdriverIO and Mocha.js in VSCode, but when I write the code, VSCode automatically wraps it in parentheses, which makes my code fail.

How can I turn this feature off?

When I write the code like this, it cannot find the element (because of the async mode, I think). I don't want these parentheses, but VSCode automatically adds them.

When I write the code like this, all the buttons are clicked without any problem.
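Since the question's original code only appears in screenshots, here is a hypothetical sketch of the pattern that normally works: in WebdriverIO's async mode every command returns a promise, so each call needs an await inside an async Mocha callback, with no extra wrapping parentheses. The selector is a placeholder.

// Hypothetical sketch: await each WebdriverIO command inside an async callback.
describe('buttons', () => {
    it('clicks every button', async () => {
        const buttons = await $$('button'); // placeholder selector
        for (const button of buttons) {
            await button.click(); // await each command individually
        }
    });
});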

Export Azure DevOps wiki data

I have recently joined a company that has no Azure DevOps wiki, but they do have tons of documented processes, which they keep in a SharePoint library. I've spoken to other members of the company, and they've been trying to convince management to allow the use of a DevOps wiki for ages, to no avail.

No idea how I did it, but now we have permission to start using it to hold "how-tos" on using DevOps, creating pipelines, managing work items, etc., on the proviso that information about the wiki is kept back on a SharePoint list somewhere as a directory.

What management want is:

  • Name (of wiki page)
  • URL
  • Last Updated

and every time a wiki page is updated, it updates the directory.

This is so the wiki page can be accessed from the directory, as well as from the current SharePoint document library.

I've had a look, and there are lots of solutions for cloning the wiki or creating PDFs or Word documents from it, but nothing that exports the metadata of the wiki pages rather than the content itself.

Power Automate doesn't seem to have any connectors or triggers for the wiki, so I'm stuck.

Is this possible?
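It should be, at least for the page list: every Azure DevOps wiki is exposed through the Wiki REST API, which can enumerate pages. Below is a minimal Python sketch; the organization, project, wiki name, and PAT are placeholders. Note that last-updated is not part of this response, so it would likely have to come from the wiki's backing Git repository (each wiki is a Git repo) via the Commits API, which is an assumption to verify.

# A sketch, not a turnkey solution. ORG, PROJECT, WIKI, and PAT are
# hypothetical placeholders; the wiki name is usually "<project>.wiki".
import requests

ORG, PROJECT, WIKI = "my-org", "my-project", "my-project.wiki"
PAT = "personal-access-token"

resp = requests.get(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wiki/wikis/{WIKI}/pages",
    params={"path": "/", "recursionLevel": "full", "api-version": "7.0"},
    auth=("", PAT),  # Azure DevOps PATs go in as basic auth with an empty user
)
resp.raise_for_status()

def walk(page):
    # Flatten the page tree into (path, url) rows for the directory list
    yield page["path"], page.get("remoteUrl")
    for sub in page.get("subPages") or []:
        yield from walk(sub)

for path, url in walk(resp.json()):
    print(path, url)

A scheduled script like this could push its output to the SharePoint list, which may be easier than waiting for a native Power Automate wiki connector.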

Accessing environment variables in GitHub Actions

I have a web application with different environments: development, stage, and prod. Each environment has its own URL:

  1. development: my-app-dev.com
  2. stage: my-app-stg.com
  3. prod: my-app.com

I am building a GitHub Action which requires these URLs. I created environments under the settings of my GitHub repo and added a variable named URL in all three environments. Now I want to access this URL based on the environment selected when the workflow runs.

So far, this is how I am selecting an environment:

name: Run Playwright Tests
on:
  workflow_dispatch:
    inputs:
      environment:
        type: environment
        description: Select the environment

Now, how do I use the environment to access the variable inside it?

I tried this:

name: Run Playwright Tests
on:
  workflow_dispatch:
    inputs:
      environment:
        type: environment
        description: Select the environment

jobs:
  build:
    runs-on: uhg-runner-m
    steps:
      - name: get-env-var
        run: echo "Environment is:$URL"

I am expecting the output to be my-app-dev.com if I select development during workflow dispatch.

Here is how I set the environment variable: [screenshot not included]
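A hedged sketch of what appears to be missing, based on how GitHub Actions documents environments and configuration variables: the job itself has to opt in to the selected environment via the environment key, and the variable is then read from the vars context. This assumes URL was created as an environment-level configuration variable, as described above.

jobs:
  build:
    runs-on: uhg-runner-m
    environment: ${{ inputs.environment }}  # bind the job to the chosen environment
    env:
      URL: ${{ vars.URL }}                  # resolves to that environment's URL variable
    steps:
      - name: get-env-var
        run: echo "Environment is: $URL"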

Expected condition failed: waiting for visibility of element located by By.xpath:

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ProgramDemoQA
{
    public static WebDriver d;

    public static void main(String[] args) throws InterruptedException
    {
        System.setProperty("webdriver.chrome.driver",
                "D:\\RamanaSoft\\PracticeCoding\\DemoQA\\drivers\\chromedriver.exe");
        d = new ChromeDriver();
        d.get("https://demoqa.com");
        d.manage().timeouts().implicitlyWait(Duration.ofSeconds(20));
        d.manage().window().maximize();

        // Open the Elements card, then the Check Box page
        d.findElement(By.xpath("//h5[text()='Elements']/../..")).click();
        d.findElement(By.xpath("//span[text()='Check Box']/..")).click();
        Thread.sleep(5000);
        // Expand the "Home" node
        d.findElement(By.xpath("//button[@aria-label='Toggle']")).click();

        WebDriverWait w = new WebDriverWait(d, Duration.ofSeconds(10));
        w.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//span[text()='Desktop']/../../button")));
        WebElement Desktop = d.findElement(By.xpath("//span[text()='Desktop']/../../button"));

        WebDriverWait w1 = new WebDriverWait(d, Duration.ofSeconds(60));
        w1.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(
                "//span[text()='Notes']//ancestor::label")));
        WebElement Notes = d.findElement(By.xpath("//span[text()='Notes']//ancestor::label"));

        WebDriverWait w2 = new WebDriverWait(d, Duration.ofSeconds(50));
        w2.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(
                "//span[text()='Commands']//ancestor::label")));
        WebElement Commands = d.findElement(By.xpath("//span[text()='Commands']//ancestor::label"));

        if (Desktop.isDisplayed())
        {
            Desktop.click();
            //Notes.click();
            Commands.click();
            System.out.println("All clicked");
        }

        if (Desktop.isDisplayed())
        {
            //Desktop.click();
            Commands.click();
            System.out.println("Only Notes clicked");
        }

        if (Desktop.isDisplayed())
        {
            //Desktop.click();
            //Notes.click();
            System.out.println("Only Commands clicked");
        }
    }
}

I'm trying to automate https://demoqa.com/checkbox. I clicked on the toggle button of Home and tried to click Desktop, but it throws the exception below:

Expected condition failed: waiting for visibility of element located by By.xpath: //span[text()='Notes']//ancestor::label (tried for 60 second(s) with 500 milliseconds interval)
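A hedged reading of that failure, based on the structure of the demoqa.com checkbox tree: Notes and Commands sit under the Desktop node, which is still collapsed when the wait for Notes starts, so that label can never become visible within the timeout. Expanding Desktop first, reusing the locator already in the code, should let the later waits succeed:

        // Assumption: the Desktop node must be expanded before its
        // children ("Notes", "Commands") exist as visible labels.
        d.findElement(By.xpath("//span[text()='Desktop']/../../button")).click();

        // ...then the existing waits for "Notes" and "Commands" can run.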

How can I use the Indexing API with Apps Script and JavaScript?

I have a Google developer account in Google Cloud Console. I created a project and set up everything for it: key, credentials. I enabled the Indexing API. Now I need to connect it to Apps Script and write the code there. However, all I can find is about doing this with Python.

I am not a programmer, so please don't make fun of me. I'm just learning how to automate processes for SEO purposes.

I was sent a script that worked with Google Sheets and Apps Script. I created a project in the Cloud Console, set up the Apps Script per the instructions, and now it's working. It extracts data into the sheet.

How do I set up the Apps Script to work with the Cloud Console so I can start indexing URLs?
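For what it's worth, the Indexing API is just an HTTPS endpoint, so Apps Script can call it with UrlFetchApp. Below is a minimal sketch, under two assumptions about your setup: the script is attached to the Cloud project where the Indexing API is enabled, and the script's manifest (appsscript.json) lists the https://www.googleapis.com/auth/indexing OAuth scope.

// Sketch only; assumes the Indexing API is enabled on the linked Cloud
// project and the manifest declares the indexing OAuth scope.
function notifyGoogleOfUrl(url) {
  var response = UrlFetchApp.fetch(
    'https://indexing.googleapis.com/v3/urlNotifications:publish',
    {
      method: 'post',
      contentType: 'application/json',
      headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() },
      payload: JSON.stringify({ url: url, type: 'URL_UPDATED' })
    }
  );
  Logger.log(response.getContentText());
}

// Example: read URLs from column A of the active sheet and submit each one.
function indexUrlsFromSheet() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var urls = sheet.getRange(1, 1, sheet.getLastRow(), 1).getValues();
  urls.forEach(function (row) {
    if (row[0]) notifyGoogleOfUrl(row[0]);
  });
}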

Using a second Ansible server in Azure

I’m a university student, and we are getting started with service automation and getting to know Ansible. I really love creating playbooks and learning more about Ansible, but I have to automate a multitier network in Azure. I have already created and tested my playbooks (on my local Ubuntu server) for creating my Azure network, subnets, NSGs, and multiple virtual machines, and I have the playbooks that I want to run on these created VMs. I will have 2 virtual machines with a public IP, 1 load balancer, and 1 steppingstone server, and only the steppingstone server can be reached with SSH. But how do I run my playbooks from my local Ansible server against my Azure environment? Can I give this as a task when I create the VM, or can I use a server with a public IP in Azure to send the right playbook to the right server?

I have done some research, but I don’t know if I’m searching for the wrong things; I can’t find how to start.
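One common pattern, sketched here as a starting point rather than the only approach: keep running the playbooks from the local control node and let SSH hop through the steppingstone server with ProxyJump, configured in the inventory. Group names, users, and addresses below are hypothetical placeholders.

# Hypothetical inventory: Ansible reaches the private VMs by jumping
# through the steppingstone's public IP.
[azure_vms]
web1 ansible_host=10.0.1.4
web2 ansible_host=10.0.1.5

[azure_vms:vars]
ansible_user=azureuser
ansible_ssh_common_args='-o ProxyJump=azureuser@<steppingstone-public-ip>'

With that in place, ansible-playbook runs from the local server exactly as before. The alternative of baking configuration into VM creation (for example cloud-init or VM extensions) also works, but is harder to re-run afterwards.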

Error with array and sed WP config replacement

newwpuser=$cpuser"_"$wpuser
newwpdb=$cpuser"_"$wpdb
wpdb=($(find . -name "wp-config.php" -print0 | xargs -0 -r grep -e "DB_NAME" | cut -d \' -f 4))
wpuser=($(find . -name "wp-config.php" -print0 | xargs -0 -r grep -e "DB_USER" | cut -d \' -f 4))
wpconfigchanges=($(find . -name wp-config.php -type f))

for i in "${wpconfigchanges[@]}"; do -exec sed -i -e "/DB_USER/s/'$wpuser'/'$newwpuser'/" | -exec sed -i -e "/DB_NAME/s/'$wpdb'/'$newwpuser'/"; done

I am trying to run the above in order to find all WordPress configs and prefix the DB user and DB name with cpuser_.

However, I get the following error:

./test.sh: line 85: -exec: command not found
./test.sh: line 85: -exec: command not found

Have I inputted the exec commands wrong?

$cpuser is input when the script is executed.
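Yes: -exec is an action of find(1), not a standalone command, which is why bash reports "-exec: command not found". Inside a shell for loop, sed is called directly with the file as an argument. A hedged rewrite of the loop follows; note two further things to verify: the second substitution probably wants $newwpdb rather than $newwpuser, and $newwpuser/$newwpdb are currently computed before $wpuser/$wpdb have been read from the configs (which are also arrays, so a bare $wpuser expands only to the first element).

for i in "${wpconfigchanges[@]}"; do
    sed -i -e "/DB_USER/s/'$wpuser'/'$newwpuser'/" \
           -e "/DB_NAME/s/'$wpdb'/'$newwpdb'/" "$i"
done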

Automate Emails for Attendance

We want to automate sending emails to people if they do not attend meetings for over two weeks. We have a list of attendees and their contact information, and we want to create a template to send via email and if possible, via text. The frequency will be every two weeks for people who have not been in attendance.

Is there a best way to go about this? Should we use a marketing tool like SurveyMonkey?

We are just looking for information and ideas at this stage.

Unable to download PDF file using Playwright with JS because it opens in preview mode in the Chrome browser

I am facing an issue where I am unable to download a PDF file because it opens in preview mode in the Chrome browser. I want to disable this behavior using Playwright with JS so I can download the file directly, without opening the preview.

import { test, expect } from '@playwright/test';
const { chromium } = require('playwright');

test.only('test', async () => {
  const browser = await chromium.launch({
    args: ['--disable-pdf-extension'],
  });

  const context = await browser.newContext({
    acceptDownloads: true, // Enable automatic download handling
  });

  // Navigate to the page containing the PDF link
  const page = await context.newPage();
  await page.goto('https://example.com');
  await page.getByRole('textbox', { name: 'Email' }).fill('[email protected]');
  await page.getByRole('textbox', { name: 'Email' }).press('Tab');
  await page.getByPlaceholder('Password').fill('admin123');
  await page.getByPlaceholder('Password').press('Enter');
  await page.getByRole('link', { name: 'Qa testing Learning', exact: true }).click();
  await page.getByText('Subscribers').click();
  const pdfLink = page.getByRole('row', { name: 'pdf' }).getByRole('link');
  await pdfLink.click();

  await page.pause();
});
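One thing worth trying, as a sketch rather than a guaranteed fix: Playwright's own download handling waits for the 'download' event in parallel with the click and then saves the file. This assumes the link actually emits a download once the built-in PDF viewer is out of the way; if it still opens the viewer, the URL may need to be fetched directly instead.

// Sketch: capture the download event that fires when the link is clicked.
const [download] = await Promise.all([
  page.waitForEvent('download'),   // resolves when a download starts
  pdfLink.click(),
]);
await download.saveAs('downloaded.pdf');  // hypothetical target path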

Importing a massive amount of cookies

I would like to import cookies with my profiles. How could I do that on a larger scale, since I have a lot of profiles?

One of them for example is:

Name: 2429 Cookies: [{"domain":".google.com","expirationDate":1756269164,"httpOnly":false,"name":"1P_JAR","path":"/","secure":true,"session":false,"value":"2023-08-14-09","sameSite":"no_restriction"},{"domain":".google.com","expirationDate":1707559018.287,"httpOnly":false,"name":"AEC","path":"/","secure":true,"session":false,"value":"Ad49MVFK4xJJRq2b9EBcL69RLW_Bd04C9377l4ztNJfMvOHr8l8KHd06Fw","sameSite":"lax"},{"domain":".wildberries.ru","expirationDate":1699955823.408,"httpOnly":false,"name":"BasketUID","path":"/","secure":true,"session":false,"value":"ea807926-b5be-4400-82ec-47dc42555435","sameSite":"lax"},{"domain":".google.com","expirationDate":1707818218.287,"httpOnly":false,"name":"NID","path":"/","secure":true,"session":false,"value":"511=a-fY-e7MJVYOiSRNMxls0RG3lkmQfRz2Nl8YzQRZgFXlngBZYAKcWv4lT-5pSUPABsz6OyhkRl75GlDth3CJfq-z9y2GXIoTRg1g-IXQ9GfziMj_7DyO5bW0vzme4uZwMNb2mPTKYLwrgGlaMPOrDp9t6NkgpJH8_x_KL2eJTqs","sameSite":"no_restriction"},{"domain":".wildberries.ru","expirationDate":1756269164,"httpOnly":false,"name":"wbs","path":"/","secure":true,"session":false,"value":"339aadf2-a949-4df1-aee9-3b206a40f61f.1692007029","sameSite":"lax"},{"domain":".wildberries.ru","expirationDate":1726567029.441,"httpOnly":false,"name":"wbu","path":"/","secure":true,"session":false,"value":"4a86c3cf-644c-4820-b755-923feb22701e.1692007029","sameSite":"lax"},{"domain":".wildberries.ru","expirationDate":1723543024,"httpOnly":false,"name":"wbauid","path":"/","secure":true,"session":false,"value":"9123079461692007024","sameSite":"lax"},{"domain":".wildberries.ru","expirationDate":1723543356.068,"httpOnly":false,"name":"WILDAUTHNEW_V3","path":"/","secure":true,"session":false,"value":"3C11CB17BE3BEE62CA29E5B73D672B8C44784EF440FA3FB753DFA171519A0D8A5F89C316298A846F53EF3AC2CFE94DAF0318C5E6FB328B895C7A1AB81EAD748CA45AD65BB1D26C08DAFC380FBC19EE0FF52F6A2CFF6AB31E5E83697C83A070E24F6CDCE83B104E3D150F8F1840F1702554A7E21763F992433310C40D86D0271EC6273F908846A6FD9F8005AAD45F707F04EE19F7E2FBD165FC57DE962DF1CA2BC3A60E9A8E46127C7BCC1084E34E43BA73938D245A53B336D07B53A4C19E74FE74E8D5F6DBEE64E4DB72426759EE61AD27BCEE35C119A905822740D2690BC10AB579BD1FC57E1C4EEE6F86DC56CD62B7CCFE997904A26617345AA0E76B947F6A30F2A0AF3343379D37AD9055AE5A52715DEB7C91026C99F449F766593C79B46C43F631440C29F2D543470A2345078E6476157ED8","sameSite":"lax"},{"domain":".wildberries.ru","expirationDate":1756269164,"httpOnly":false,"name":"um","path":"/","secure":true,"session":false,"value":"uid%3Dw7TDssOkw7PCu8KwwrPCs8KwwrHCtcK5wrPCuA%253d%253d%3Aproc%3D100%3Aehash%3Dd41d8cd98f00b204e9800998ecf8427e","sameSite":"lax"},{"domain":".wbx-auth.wildberries.ru","expirationDate":1723543358.378,"httpOnly":false,"name":"wbx-validation-key","path":"/v2/auth","secure":true,"session":false,"value":"a4bab65e-2e30-4b43-a69b-53b1ca001f96","sameSite":"lax"},{"domain":""_wba_s","path":"/","secure":false,"session":true,"value":"1","sameSite":"unspecified"},{"domain":"ru-basket-api.wildberries.ru","expirationDate":-11644473600,"httpOnly":true,"name":"routeb","path":"/","secure":false,"session":true,"value":"1696565586.54.333.934027|c4b1652c8c0c161d421c5b735e35af14","sameSite":"unspecified"},{"domain":".wbx-auth.wildberries.ru","expirationDate":1728616411.45763,"httpOnly":true,"name":"w..."}]
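The question doesn't say which tool drives the profiles, so here is one hedged possibility: if the profiles end up in a Playwright-controlled browser, BrowserContext.add_cookies() accepts a very similar shape, and the conversion can be scripted across every exported profile file. The field mapping and file name below are assumptions based on the export shown above.

# Sketch: convert a cookie-editor-style export into Playwright's format.
import json
from playwright.sync_api import sync_playwright

# Map the export's sameSite values onto the ones Playwright accepts
SAMESITE_MAP = {"no_restriction": "None", "lax": "Lax",
                "strict": "Strict", "unspecified": "Lax"}

def convert(raw_cookies):
    return [{
        "name": c["name"],
        "value": c["value"],
        "domain": c["domain"],
        "path": c.get("path", "/"),
        "expires": c.get("expirationDate", -1),  # -1 marks a session cookie
        "httpOnly": c.get("httpOnly", False),
        "secure": c.get("secure", False),
        "sameSite": SAMESITE_MAP.get(c.get("sameSite", "lax"), "Lax"),
    } for c in raw_cookies]

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()
    with open("profile_2429.json") as f:  # hypothetical per-profile export file
        context.add_cookies(convert(json.load(f)))

Looping that over a directory of per-profile JSON files would cover the larger scale; other automation tools generally offer an equivalent cookie-import call.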

Automate Beyond Algorithms with PatternBuilder MAX

While ChatGPT and other “consumer grade” generative AI tools fail to meet the needs of legal users, AI still plays a powerful time-saving role in law firm workflows. AI represents a tremendous opportunity for legal professionals to scale up their work. 

Scott Kelly, senior manager at NetDocuments, spoke with Zack about PatternBuilder MAX, NetDocuments’ first product in its ndMAX suite of generative AI tools. According to Scott, the purpose of PatternBuilder MAX is to take generative AI and build a “fit to purpose tool for legal.” 

Secure AI for Law Firms 

Hosted on Microsoft’s Azure platform, PatternBuilder MAX respects data privacy and meets compliance requirements like data sovereignty and GDPR. PatternBuilder MAX then builds upon this “standard issue” NetDocuments security with AI-specific protections. 

  • No company uses data entered into PatternBuilder MAX to train an AI model; 
  • Microsoft employees do not monitor and cannot access PatternBuilder MAX data; and 
  • PatternBuilder MAX enforces a zero-day retention policy on any information entered into it. 

For these AI actions, no data enters the public sphere. The team designed PatternBuilder MAX to put the needs of legal professionals first. 

Time-Saving AI for Law Firms 

Having established PatternBuilder MAX’s security bona fides, Scott walked Zack through a real-world scenario where PatternBuilder MAX saves a legal user an impressive amount of time. Even simple use cases bring tremendous value when pairing generative AI with your documents. 

AI summarizes a deposition. 

Scott played the role of a firm associate who was called on to prepare a deposition summary memo. Traditionally, this task required reading hundreds of pages and manually extracting key data points and facts. The process could take weeks.

PatternBuilder MAX’s Deposition Summarizer Studio App cuts weeks to minutes. The app replaces hours of slogging with three simple steps: 

  1. Upload the deposition from your computer or select it from within NetDocuments. 
  2. Pick how long you’d like the summary to be from a list of options. 
  3. The Deposition Summarizer app produces a summary for your review. It notes facts like the deponent and deposition date. Most importantly, and more impressively, it summarizes the deponent’s actual statements, complete with citations to the source text (e.g., page 72, lines 1-4). 

Scott demoed one deposition document, but he said you could upload an entire folder of files from your computer or point the app at a folder in NetDocuments. The Deposition Summarizer would do each deposition in the folder. 

PatternBuilder MAX is more than a solution for a fixed set of problems.

The software market, even narrowed to legal software, overflows with vendors proclaiming new AI products and solutions. That’s part of the problem. As Scott stated, so much AI technology in legal is “just a solution.” A company creates it to meet a single identified need or a set of needs, and any customization or enhancement happens at the developer’s whim. 

PatternBuilder MAX provides pre-built apps to address common use cases, but also makes the full set of tools available to firms and departments to customize those apps to meet their specific needs and even create powerful applications of their own, all in a no-code environment.

Scott built the Deposition Summarizer app in 30 minutes. The build process is simple, much of it point-and-click. The “magic” lies in the prompt one gives the AI. Think of the prompt, in Scott’s words, “as like a set of instructions to a smart intern.” 

The tools Scott used to create his app are available to all PatternBuilder MAX customers. PatternBuilder MAX provides templates and templated apps, like the Deposition Summarizer, that you can use and modify out of the box. 

Approachable AI for Law Firms 

One of the key benefits of PatternBuilder MAX is that it enables you to securely and responsibly leverage documents your firm or department already stores in NetDocuments. If you have templates, samples, or model documents, PatternBuilder MAX can use those as guidelines for what a new document it creates should look like. This is critical to driving accuracy and ensuring outputs from AI are valuable.

Scott’s example starts with a new commercial lease agreement and asks PatternBuilder MAX to create a lease summary that follows the firm’s model lease summary. One can state the prompt PatternBuilder MAX relies upon as “do this to that.” Take the summary sample and summarize this new commercial lease, filling in the data requested in the summary with data from the new lease received. 

Much of what you’d like to automate with traditional document automation is fact-intensive, making predetermined logic extremely difficult. But you can predetermine the instructions you would give a smart human: “Here are examples from the past. Here are the new facts.” 

PatternBuilder MAX, the first of NetDocuments’ suite of ndMAX tools, lets you do “this” to “that” in every area of law. 

Getting Started with AI for Law Firms 

Visit www.netdocuments.com to learn more about ndMAX and PatternBuilder MAX – plus see nine studio apps (pre-built apps included with PatternBuilder MAX) by requesting a demo. Some of the world’s largest law firms are utilizing PatternBuilder MAX and seeing incredible results. PatternBuilder MAX is now available globally.

The post Automate Beyond Algorithms with PatternBuilder MAX appeared first on Lawyerist.


Scott Kelly, of NetDocs, walks Lawyerist through some practical PatternBuilder MAX artificial intelligence use cases.

Cypress GitHub Actions Build Fails with "Cannot find module" Error

I'm setting up Cypress tests in a GitHub Actions workflow, but I'm encountering an error during the build process. Everything runs correctly on my local machine, but when npm run build runs in the workflow, I receive the following error message:

[!] Error: Cannot find module '/home/runner/work/CucumberE2E/CucumberE2E/rollup.config.js' imported from /home/runner/work/CucumberE2E/CucumberE2E/node_modules/rollup/dist/shared/loadConfigFile.js

I'm using the Cypress GitHub Action cypress-io/github-action@v6 in my workflow, and that's where the issue occurs. I've tried updating my dependencies and changing the configuration file, but the problem persists.
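A hedged checklist rather than a definitive fix: that "Cannot find module .../rollup.config.js" path points inside the runner's workspace, which usually means the file never made it there, for example because the repository isn't checked out before the build step, or rollup.config.js is untracked or ignored. A typical workflow shape looks like this:

steps:
  - uses: actions/checkout@v4         # make sure rollup.config.js is in the workspace
  - uses: cypress-io/github-action@v6
    with:
      build: npm run build            # runs after the action's own install step

Running git ls-files rollup.config.js locally confirms whether the file is actually committed rather than only present on your machine.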

OpenCV best practice when comparing multiple images

I have a small project that executes ADB commands and compares the device's output with different images via OpenCV.

Since there are more than 30 different resource pictures I have to compare the screen with, I am pretty sure I'm missing the best practice that would make this less laborious.

import cv2

def compare_output():

    # device, the threshold values, and the image paths are defined
    # elsewhere in the script
    global result_val, logged_in_val

    # Grab the current screen from the device over ADB
    source = device.screencap()

    with open('screen.png', 'wb') as f:
        f.write(source)

    source_img20 = cv2.imread('screen.png', cv2.IMREAD_COLOR)
    needle_img_ref = cv2.imread(ref_screen_img, cv2.IMREAD_COLOR)
    needle_img_final = cv2.imread(final_screen_img, cv2.IMREAD_COLOR)

    result_ref = cv2.matchTemplate(source_img20, needle_img_ref, cv2.TM_CCOEFF_NORMED)
    result_final = cv2.matchTemplate(source_img20, needle_img_final, cv2.TM_CCOEFF_NORMED)

    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result_ref)
    print('> Result val referral:', max_val)
    if max_val >= threshold_referral:
        result_val += 1
        logged_in_val += 1

    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result_final)
    print('> Result val new acc:', max_val)
    if max_val >= threshold_final:
        pass  # do something

In this example, you can see I am comparing the images with the current screen, assigning a variable according to the outcome, and then processing that variable for further use.

So my question is: what is the best practice for doing what I described above with fewer lines of code?
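One common pattern, sketched under the assumption that every comparison follows the same template-match shape as above: keep the needle images and thresholds in a dict and loop over it, so adding the 31st reference image is one line of data rather than another block of code. The names and paths below are placeholders.

import cv2

# Hypothetical registry of reference images: name -> (file path, threshold)
NEEDLES = {
    'referral': ('ref_screen.png', 0.8),
    'final':    ('final_screen.png', 0.8),
    # ...the other ~30 reference images go here
}

def match_screen(screen_path='screen.png'):
    source = cv2.imread(screen_path, cv2.IMREAD_COLOR)
    hits = {}
    for name, (path, threshold) in NEEDLES.items():
        needle = cv2.imread(path, cv2.IMREAD_COLOR)
        result = cv2.matchTemplate(source, needle, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        print('> Result val', name + ':', max_val)
        if max_val >= threshold:
            hits[name] = max_val  # record which screens matched
    return hits

Callers can then react to the returned dict (for example, increment counters when 'referral' is present) instead of threading per-image variables through the function.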
