โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Maximum number of elements of a list whose values sum up to at most K in O(log n)

I have this exercise to do:

Let M be a positive integer, and V = โŸจv1, . . . , vnโŸฉ an ordered vector where the value of item vi is 5ร—i.

Present an O(log(n)) algorithm that returns the maximum number of items from V that can be selected given that the sum of the selected items is less than or equal to M (repeated selection of items is not allowed).

First I did a naive solution:

  • Since vi = 5×i, any element that can still fit under the budget sits at index M/5 or below. So I looped i = 0 .. M/5 and accumulated the sum. However, this is not O(log(n)): given a big M, larger than the sum of all elements in the array, it becomes O(n).

Therefore I tried divide and conquer; I thought binary search should be the way. But actually no, because that way I end up summing the few large elements that reach M, i.e. the minimum number of items, not the maximum. My code is below:

import math

def max_items_selected_recursive2(M, V, left, right, max):
    if len(V[left:right]) == 1:
        return max

    mid = math.floor((left + right) / 2)
    if V[mid] >= M:
        return max_items_selected_recursive2(M - V[mid], V, mid + 1, right, max + 1)
    else:
        if M - V[mid] >= 0:
            return max_items_selected_recursive2(M - V[mid], V, left, mid - 1, max + 1)
        else:
            return max_items_selected_recursive2(M, V, left, mid - 1, max)

Example call:

M = 20
V = [0, 5, 10, 15, 20]
max_items_selected_recursive2(M, V, 0, len(V) - 1, 0) + 1  # +1 since V includes the 0 element

Any ideas on how to do this in O(log n)?
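One observation that points at an O(log n) method (my own sketch, not part of the original question): because the values are fixed at vi = 5×i, the sum of the k smallest items has a closed form, so you can binary search on the count k itself instead of on array indices. Assuming V = [0, 5, ..., 5(n-1)] as in the example call, that sum is 5·k·(k-1)/2 (with 1-based values it would be 5·k·(k+1)/2 instead):

def max_items(M, n):
    # V is assumed to be [0, 5, 10, ..., 5*(n-1)], as in the example call.
    # The sum of the k smallest items 0 + 5 + ... + 5*(k-1) is 5*k*(k-1)/2.
    lo, hi = 0, n
    while lo < hi:
        k = (lo + hi + 1) // 2
        if 5 * k * (k - 1) // 2 <= M:
            lo = k       # k items fit under the budget; try for more
        else:
            hi = k - 1   # k items already overshoot M
    return lo

max_items(20, 5)  # -> 3: items 0, 5, 10 sum to 15 <= 20

Each step halves the candidate range for k, so the search is O(log n); with the closed form you could even solve the quadratic directly for O(1).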

Similar to a coin change problem, but with "coin" repetitions and another optimization goal

The goal is to generate all possible sequences of positive integers from a list L, where each number can be used unlimited times (so repetitions are allowed), that add up to a sum targetSum, with the constraint that the number of elements in a generated sequence lies in the range between n and m (including n and m).

The code below is what I have come up with so far, but it runs too slow for its intended purpose as part of an optimization problem:

def allArrangementsOfIntegerItemsInLsummingUpTOtargetSum(L, targetSum, n=None, m=None):
    if n is None: n = 1
    if m is None: m = targetSum
    stack = []
    # Seed the stack with a one-element path for each item in L.
    # (The third tuple element, `start`, is carried along but never used,
    # since every extension step below revisits all items of L.)
    for i in range(len(L)):
        stack.append((L[i], [L[i]], 0))

    while stack:
        currentSum, path, start = stack.pop()
        # Yield the current path if it meets the criteria
        if currentSum == targetSum and n <= len(path) <= m:
            yield path
        if currentSum > targetSum or len(path) > m:
            continue  # stop exploring: this path cannot become valid anymore
        # Continue to build the path by adding every element of L
        # (starting from index 0 allows repetitions in any order)
        for i in range(len(L)):
            stack.append((currentSum + L[i], path + [L[i]], 0))

splitsGenerator = allArrangementsOfIntegerItemsInLsummingUpTOtargetSum

Any idea how to write code that arrives at the result an order of magnitude faster?

I have searched the Internet for weeks and found that none of the known knapsack, coin change, or dynamic programming based optimization approaches cover this basic task. A special case of it is dividing a list into partitions with sizes ranging from a to b, in order to optimize an overall weight function that aggregates values obtained from a local weight function computing a single weight from the items in each partition.

To give an example, let's consider the case of list L = [ 13, 17, 23 ] and targetSum = 30, with n=1 and m=30. There are two possible sequences that arrive at a sum of 30:

  • [ 13, 17 ]
  • [ 17, 13 ]

Now let's consider a list L = [ 125, 126, 127, ... , 130 ] and a targetSum = 6000. One possible sequence here is:

  • [ 125, 125, ... , 125 ] with a length of 48 elements

What I am interested in is another algorithm able to come up with the results much faster; the programming language is secondary, but I prefer Python, both to describe the algorithm and to test it against the provided code.
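One possible direction (my sketch, not the author's code; all function names below are made up): enumerate each multiset of values only once, in non-decreasing order, and then expand every multiset into its distinct orderings. This avoids re-exploring the same partial sums over and over, and it never enumerates factorially many identical orderings for cases like the 48 × 125 example:

from collections import Counter

def multisets(L, remaining, length, start, n, m):
    # Combinations with repetition in non-decreasing order: no duplicates.
    if remaining == 0:
        if length >= n:
            yield ()
        return
    if length == m:
        return
    for i in range(start, len(L)):
        v = L[i]
        if v > remaining:
            break  # L is sorted, so no later value fits either
        for rest in multisets(L, remaining - v, length + 1, i, n, m):
            yield (v,) + rest

def distinct_orderings(counter):
    # All distinct permutations of a multiset given as a Counter.
    if not counter:
        yield ()
        return
    for v in list(counter):
        counter[v] -= 1
        if counter[v] == 0:
            del counter[v]
        for rest in distinct_orderings(counter):
            yield (v,) + rest
        counter[v] += 1

def fast_arrangements(L, targetSum, n=1, m=None):
    if m is None:
        m = targetSum
    L = sorted(set(L))
    for ms in multisets(L, targetSum, 0, 0, n, m):
        yield from (list(p) for p in distinct_orderings(Counter(ms)))

print(list(fast_arrangements([13, 17, 23], 30, 1, 30)))
# [[13, 17], [17, 13]]

For inputs with heavy repetition, the multiset layer collapses the search dramatically; and if only the number of sequences is needed, the expansion step can be replaced by a multinomial coefficient instead of being enumerated.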

Union Find - Why are we checking Size for Weighted Quick Union

I was going through Princeton's Algorithms course on Coursera.

In the Union Find section, for weighted quick union, we merge trees depending on which tree is smaller in terms of size.


However, I don't understand why we use size rather than depth to decide which tree is larger and which is smaller.

Wouldn't the worst-case time complexity of finding the root increase because of tree depth?


In the example from the course, if we merge the two trees by size, the depth of the resulting tree is 4, whereas merging by depth gives a smaller resulting tree depth.
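For reference, a minimal union-by-size sketch (in Python rather than the course's Java) of the rule being asked about:

class WeightedQuickUnion:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n  # number of nodes in the tree rooted at each index

    def find(self, p):
        while p != self.parent[p]:
            p = self.parent[p]
        return p

    def union(self, p, q):
        root_p, root_q = self.find(p), self.find(q)
        if root_p == root_q:
            return
        # link the root of the smaller tree under the root of the larger one
        if self.size[root_p] < self.size[root_q]:
            root_p, root_q = root_q, root_p
        self.parent[root_q] = root_p
        self.size[root_p] += self.size[root_q]

A standard observation on the design choice: with union by size, a node's depth grows only when its tree is merged into a tree at least as large, which at least doubles the size of the tree containing it; that can happen at most log2(n) times, so the height stays O(log n) even in examples like the one described above where union by depth would give a slightly shallower tree.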

JavaScript: Get the largest rectangle possible when the corners of the rectangle are cut

So we have a rectangle with, let's say, dimensions of 120ft length, 72ft breadth and 20mm thickness (we assume the thickness is negligible and thus treat this cuboid as a 2D rectangle). I want to write a JavaScript function such that, if any rectangular regions are cut from the corners of this rectangle, I can still find the largest chunk possible.

Here's my approach, but it's wrong:

class Rectangle {
    length: number;
    breadth: number;
    thickness: number;
    area: number;

    constructor(length: number, breadth: number, thickness: number) {
        this.length = length;
        this.breadth = breadth;
        this.thickness = thickness;
        this.area = length * breadth;
    }
}

const calculateUsableDimensions = (
    deductions: IDefectWidgetContent['deductions'],
    defaultDimensions: IDefectWidgetContent['defaultDimensions']
): Rectangle => {
    const { length: defaultLength, breadth: defaultBreadth, thickness } = defaultDimensions;

    deductions.sort(
        (a, b) => Number(b.length) * Number(b.breadth) - Number(a.length) * Number(a.breadth)
    );

    let lengthSide = 0;
    let breadthSide = 0;

    for (const deduction of deductions) {
        const { corner, length: deductionLength, breadth: deductionBreadth, thickness } = deduction;

        if (corner === 'A' || corner === 'B') {
            lengthSide += Number(deductionLength);
        } else {
            breadthSide += Number(deductionBreadth);
        }
    }

    const remainingLength = Number(defaultLength) - lengthSide;
    const remainingBreadth = Number(defaultBreadth) - breadthSide;

    return new Rectangle(remainingLength, remainingBreadth, 20); // Thickness is set to 20
};

Here is the playground for this: Playground | StoneTEKK Defects Visualizer

This is what the dimensions and deductions data looks like:

deductions: [
    { corner: 'A', length: 'l', breadth: 'b', thickness: '20' },
    { corner: 'B', length: 'l', breadth: 'b', thickness: '20' },
    { corner: 'C', length: 'l', breadth: 'b', thickness: '20' },
    { corner: 'D', length: 'l', breadth: 'b', thickness: '20' }
],
defaultDimensions: { length: '120', breadth: '72', thickness: '20' },

Assume all data types are int/Number. The visualizer gives an accurate representation of the cut-outs; it just does not calculate them correctly, and fails to subtract the deductions in a way that handles overlaps and produces the biggest available rectangle. Visualizer
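One way to make the calculation robust (a sketch in Python for compactness — the logic translates directly to JavaScript — and under the assumption that each cut is first converted to an absolute (x0, y0, x1, y1) rectangle rather than the corner/length/breadth records above): compress the cut coordinates into a small grid and brute-force every candidate rectangle of free cells. With at most four corner cuts the grid is tiny, so the quadratic scan over cells is negligible:

def largest_chunk(W, H, cuts):
    # cuts: list of removed regions as absolute (x0, y0, x1, y1) rectangles
    xs = sorted({0, W} | {c[0] for c in cuts} | {c[2] for c in cuts})
    ys = sorted({0, H} | {c[1] for c in cuts} | {c[3] for c in cuts})

    def cell_free(i, j):
        # test the cell's midpoint against every cut
        cx, cy = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
        return not any(x0 < cx < x1 and y0 < cy < y1 for x0, y0, x1, y1 in cuts)

    best_area, best_rect = 0, None
    for i0 in range(len(xs) - 1):
        for i1 in range(i0, len(xs) - 1):
            for j0 in range(len(ys) - 1):
                for j1 in range(j0, len(ys) - 1):
                    if all(cell_free(i, j)
                           for i in range(i0, i1 + 1)
                           for j in range(j0, j1 + 1)):
                        area = (xs[i1 + 1] - xs[i0]) * (ys[j1 + 1] - ys[j0])
                        if area > best_area:
                            best_area = area
                            best_rect = (xs[i0], ys[j0], xs[i1 + 1], ys[j1 + 1])
    return best_area, best_rect

# Example: a 120 x 72 sheet with a 10 x 8 cut at the origin corner and a
# 20 x 6 cut at the opposite corner.
print(largest_chunk(120, 72, [(0, 0, 10, 8), (100, 66, 120, 72)]))

Converting the corner records into absolute rectangles first (corner A at the origin, B, C, D at the other corners) makes overlapping deductions a non-issue, since the grid test only asks whether a cell is covered by any cut.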

Constraint Satisfaction Problem using CP-SAT in ortools

How can I create an assignment for tasks based on room, day, and time without any overlaps? The tasks also have specific consecutive time requirements.

Sample variables:

tasks = {'task1': 4, 'task2': 2, 'task3': 3, 'task4': 1, 'task5': 2,
         'task6': 2, 'task7': 4, 'task8': 4, 'task9': 1, 'task10': 1,
         'task11': 4, 'task12': 2, 'task13': 3, 'task14': 1, 'task15': 2}  # task with interval
room = ['room1', 'room2']
day = ['Monday', 'Tuesday']  # the size of a day is the range of the time, meaning it is 12
time = ['7:00', '8:00', '9:00', '10:00', '11:00', '12:00',
        '13:00', '14:00', '15:00', '16:00', '17:00', '18:00']

I am open to other kinds of approaches, because my code is not working.

from ortools.sat.python import cp_model

def solve_task_scheduling(tasks, rooms, days, time_slots, day_time_range):
    model = cp_model.CpModel()

    num_tasks = len(tasks)
    num_rooms = len(rooms)
    num_days = len(days)
    num_time_slots = len(time_slots)

 
    start_times = [model.NewIntVar(0, sum(day_time_range) - tasks[task], f'start_{task}') for task in tasks]
    end_times = [model.NewIntVar(tasks[task], sum(day_time_range), f'end_{task}') for task in tasks]
    room_assignments = [model.NewIntVar(0, num_rooms - 1, f'room_{task}') for task in tasks]
    day_assignments = [model.NewIntVar(0, num_days - 1, f'day_{task}') for task in tasks]
    time_slot_assignments = [model.NewIntVar(0, num_time_slots - 1, f'time_{task}') for task in tasks]

  
    for i, task in enumerate(tasks):
        model.Add(end_times[i] - start_times[i] == tasks[task])

    names = list(tasks)
    for i in range(num_tasks):
        for j in range(i + 1, num_tasks):
            # AddBoolOr expects Boolean literals, so each condition is
            # reified onto its own Boolean variable first
            lits = [model.NewBoolVar(f'c{k}_{i}_{j}') for k in range(4)]
            model.Add(room_assignments[i] != room_assignments[j]).OnlyEnforceIf(lits[0])
            model.Add(day_assignments[i] != day_assignments[j]).OnlyEnforceIf(lits[1])
            model.Add(time_slot_assignments[i] + tasks[names[i]]
                      <= time_slot_assignments[j]).OnlyEnforceIf(lits[2])
            model.Add(time_slot_assignments[j] + tasks[names[j]]
                      <= time_slot_assignments[i]).OnlyEnforceIf(lits[3])
            model.AddBoolOr(lits)

   
    objective_var = model.NewIntVar(0, sum(day_time_range) * num_tasks, 'objective')
    model.AddMaxEquality(objective_var, end_times)
    model.Minimize(objective_var)

  
    solver = cp_model.CpSolver()
    status = solver.Solve(model)

    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        result = {}
        for i, task in enumerate(tasks):
            # solution values must be read through solver.Value(...)
            room = rooms[solver.Value(room_assignments[i])]
            day = days[solver.Value(day_assignments[i])]
            start_time_index = solver.Value(time_slot_assignments[i])
            end_time_index = start_time_index + tasks[task]
            time_slot = time_slots[start_time_index:end_time_index]
            result[task] = {'Room': room, 'Day': day, 'Time': time_slot}
        return result
    else:
        return None
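Since the question is open to other approaches: the more idiomatic CP-SAT formulation uses optional interval variables plus AddNoOverlap per (room, day) pair, which replaces all the pairwise constraints. A rough sketch (variable and function names here are mine, not from the original code):

from ortools.sat.python import cp_model

def solve_with_intervals(tasks, rooms, days, num_slots=12):
    model = cp_model.CpModel()
    starts, presences = {}, {}
    per_room_day = {(r, d): [] for r in rooms for d in days}

    for name, dur in tasks.items():
        # durations are assumed to fit within a single day
        start = model.NewIntVar(0, num_slots - dur, f'start_{name}')
        starts[name] = start
        lits = []
        for r in rooms:
            for d in days:
                lit = model.NewBoolVar(f'{name}_{r}_{d}')
                iv = model.NewOptionalFixedSizeIntervalVar(
                    start, dur, lit, f'iv_{name}_{r}_{d}')
                per_room_day[(r, d)].append(iv)
                presences[(name, r, d)] = lit
                lits.append(lit)
        model.Add(sum(lits) == 1)  # each task gets exactly one room/day

    for ivs in per_room_day.values():
        model.AddNoOverlap(ivs)  # no two tasks share a room, day and hour

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {name: (r, d, solver.Value(starts[name]))
                for (name, r, d), lit in presences.items()
                if solver.Value(lit)}
    return None

A makespan objective like the original one can then be added back by minimizing the maximum of the task end times.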

These are the inputs:

# Sample input
tasks = {
    'task1': 4, 'task2': 2, 'task3': 3, 'task4': 1, 'task5': 2,
    'task6': 2, 'task7': 4, 'task8': 4, 'task9': 1, 'task10': 1,
    'task11': 4, 'task12': 2, 'task13': 3, 'task14': 1, 'task15': 2
}
rooms = ['room1', 'room2']
days = ['Monday', 'Tuesday']
time_slots = ['7:00-8:00', '8:00-9:00', '9:00-10:00', '10:00-11:00', '11:00-12:00',
              '12:00-13:00', '13:00-14:00', '14:00-15:00', '15:00-16:00', '16:00-17:00',
              '17:00-18:00', '18:00-19:00']
day_time_range = [12, 12]  # Monday and Tuesday both have a time range of 12 hours

result = solve_task_scheduling(tasks, rooms, days, time_slots, day_time_range)

To print the result:

if result:
    for task, details in result.items():
        print(f"{task} - {details['Room']}, {details['Day']}, {details['Time']}")

The output should look like this:

task1 - room1, monday, ['7:00', '8:00', '9:00', '10:00']
task2 - room1, monday, ['11:00', '12:00']
task3 - room1, monday, ['13:00', '14:00', '15:00']
task4 - room1, monday, ['16:00']
task5 - room1, monday, ['16:00', '17:00', '18:00']
task6 - room1, Tuesday, ['7:00', '8:00']
task7 - room1, Tuesday, ['9:00', '10:00','11:00', '12:00']
task8 - room1, Tuesday, ['13:00', '14:00', '15:00', '16:00']
task9 - room1, Tuesday, ['17:00']
task10 - room1, Tuesday, ['18:00']
task11 - room2, monday, ['7:00', '8:00', '9:00', '10:00']
task12 - room2, monday, ['11:00', '12:00']
task13 - room2, monday, ['13:00', '14:00', '15:00']
task14 - room2, monday, ['16:00']
task15 - room2, monday, ['16:00', '17:00', '18:00']

though it does not have to match this exactly.

Are there techniques to mathematically compute the amount of searching in greedy graph searching?

I have developed a greedy graph search algorithm and made some optimizations. However, its performance varies depending on the parameters I use. I want to create a module that can compute the optimal parameter settings to enhance the efficiency of my algorithm. Are there techniques for analyzing, at least approximately exactly, the amount of searching involved in greedy graph search? I prefer not to rely on asymptotic time complexity analysis, as it does not provide sufficient precision for my specific problem.


Can this general implementation of STDIO for JavaScript be improved?

ECMA-262 doesn't specify STDIO. If you regularly use more than one JavaScript engine or runtime, you'll notice that no two JavaScript runtimes implement STDIO the same way in their respective runtime-specific APIs.

After some experimentation and testing I cobbled together this script that achieves the same result using the standalone JavaScript executables node, deno, bun.

The algorithm just happens to be for Native Messaging, where the protocol is

The same format is used to send messages in both directions; each message is serialized using JSON, UTF-8 encoded and is preceded with 32-bit message length in native byte order

Modern JavaScript runtimes have, in general, implemented WHATWG Streams to some extent.

Setting aside the Uint32Array length prefix that precedes each message, we can use the same code in multiple JavaScript runtimes for a common, portable STDIO implementation.
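For comparison, the quoted framing (32-bit native-order length prefix, then a UTF-8 JSON payload) is easy to state outside JavaScript; a minimal Python host sketch, not part of the code under review, reads and writes messages like this:

import json
import struct
import sys

def read_message():
    # 32-bit message length in native byte order, then UTF-8 JSON payload
    raw = sys.stdin.buffer.read(4)
    if len(raw) < 4:
        return None  # stdin closed
    (length,) = struct.unpack('=I', raw)
    return json.loads(sys.stdin.buffer.read(length).decode('utf-8'))

def send_message(obj):
    payload = json.dumps(obj).encode('utf-8')
    sys.stdout.buffer.write(struct.pack('=I', len(payload)) + payload)
    sys.stdout.buffer.flush()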

Can the following general STDIO implementation for JavaScript be improved?

/*
#!/usr/bin/env -S /home/user/bin/deno run -A /home/user/bin/nm_host.js
#!/usr/bin/env -S /home/user/bin/node --experimental-default-type=module /user/bin/nm_host.js
#!/usr/bin/env -S /home/user/bin/bun run --smol /home/user/bin/nm_host.js
*/

const runtime = navigator.userAgent;
// 1 MB formatted JSON, e.g., new Array(209715)
const buffer = new ArrayBuffer(0, { maxByteLength: 1024 ** 2 });
const view = new DataView(buffer);
const encoder = new TextEncoder();
const { dirname, filename, url } = import.meta;

let readable, writable, exit, args, cwd;

if (runtime.startsWith("Deno")) {
  ({ readable } = Deno.stdin);
  ({ writable } = Deno.stdout);
  ({ exit } = Deno);
  ({ args } = Deno);
}

if (runtime.startsWith("Node")) {
  const { Duplex } = await import("node:stream");
  ({ readable } = Duplex.toWeb(process.stdin));
  ({ writable } = Duplex.toWeb(process.stdout));
  ({ exit } = process);
  ({ argv: args } = process);
}

if (runtime.startsWith("Bun")) {
  readable = Bun.stdin.stream();
  writable = new WritableStream({
    async write(value) {
      await Bun.write(Bun.stdout, value);
    },
  }, new CountQueuingStrategy({ highWaterMark: Infinity }));
  ({ exit } = process);
  ({ argv: args } = Bun);
}

const hostdir = args.at(-2).slice(0, args.at(-2).lastIndexOf("/"));


if (runtime.startsWith("Bun") || runtime.startsWith("Node")) {
  process.chdir(hostdir);
  cwd = process.cwd();
}

if (runtime.startsWith("Deno")) {
  await Deno.chdir(hostdir);
  cwd = Deno.cwd();
}

function encodeMessage(message) {
  return encoder.encode(JSON.stringify(message));
}

async function* getMessage() {
  let messageLength = 0;
  let readOffset = 0;
  for await (let message of readable) {
    if (buffer.byteLength === 0) {
      buffer.resize(4);
      for (let i = 0; i < 4; i++) {
        view.setUint8(i, message[i]);
      }
      messageLength = view.getUint32(0, true);
      message = message.subarray(4);
      buffer.resize(0);
    }
    buffer.resize(buffer.byteLength + message.length);
    for (let i = 0; i < message.length; i++, readOffset++) {
      view.setUint8(readOffset, message[i]);
    }
    if (buffer.byteLength === messageLength) {
      yield new Uint8Array(buffer);
      messageLength = 0;
      readOffset = 0;
      buffer.resize(0);
    }
  }
}

async function sendMessage(message) {
  await new Blob([
    new Uint8Array(new Uint32Array([message.length]).buffer),
    message,
  ])
    .stream()
    .pipeTo(writable, { preventClose: true });
}

try {
  await sendMessage(encodeMessage([{ dirname, filename, url }, { cwd }, ...args]));
  for await (const message of getMessage()) {
    await sendMessage(message);
  }
} catch (e) {
  exit();
}

/*
export {
  args,
  encodeMessage,
  exit,
  getMessage,
  readable,
  sendMessage,
  writable,
};
*/

The simplest algorithm for poker hand evaluation

I am thinking about poker hand (5 cards) evaluation in Java. Right now I am looking for simplicity and clarity rather than performance and efficiency. I could probably write a "naive" algorithm, but it would require a lot of code.

I also saw a few poker evaluation libraries, which use hashing and bitwise operations, but they look rather complex.

What is the "cleanest and simplest" algorithm for poker hand evaluation?
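One commonly cited simple-and-clear approach, sketched here in Python (the grouping idea ports directly to Java; all names below are mine): sort the rank multiplicities, which pins down every category except straights and flushes, and return a tuple that compares correctly between any two hands:

from collections import Counter

def classify(hand):
    # hand: five (rank, suit) pairs, with rank in 2..14 (ace = 14)
    ranks = sorted((r for r, _ in hand), reverse=True)
    # group ranks by multiplicity, highest count first, then highest rank
    groups = sorted(Counter(ranks).items(), key=lambda rc: (-rc[1], -rc[0]))
    shape = [count for _, count in groups]
    flush = len({s for _, s in hand}) == 1
    wheel = ranks == [14, 5, 4, 3, 2]            # A-2-3-4-5 straight
    straight = wheel or (len(shape) == 5 and ranks[0] - ranks[4] == 4)

    if straight and flush:        category = 8   # straight flush
    elif shape == [4, 1]:         category = 7   # four of a kind
    elif shape == [3, 2]:         category = 6   # full house
    elif flush:                   category = 5
    elif straight:                category = 4
    elif shape == [3, 1, 1]:      category = 3   # three of a kind
    elif shape == [2, 2, 1]:      category = 2   # two pair
    elif shape == [2, 1, 1, 1]:   category = 1   # one pair
    else:                         category = 0   # high card

    kickers = [rank for rank, _ in groups]
    if wheel:
        kickers = [5, 4, 3, 2, 1]  # the ace plays low in the wheel
    return (category, kickers)     # compare two hands by comparing these tuples

print(classify([(14, 's'), (13, 's'), (12, 's'), (11, 's'), (10, 's')]))
# (8, [14, 13, 12, 11, 10]) -- a royal flush outranks everything below it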

DS insertion in O(1) amortized and removal O(log n) amortized [closed]

I'm currently working on a project where I need to implement a custom data structure in C++ without using the Standard Template Library (STL), so I need to implement every data structure I plan to use. The key requirements are that it support insertion in O(1) amortized time and removal in O(log n) amortized time; the objects each carry a unique ID. Initially I attempted to combine hash tables with AVL trees, but this approach didn't meet the expected time complexity constraints. To be clear, I am looking for amortized time, not average time.

The context is a system that manages a collection of objects, which frequently undergoes operations of addition and removal. The challenge lies in achieving efficient insertion while also maintaining a reasonable time complexity for removals.

Considering these constraints, could you suggest alternative methods or improvements to my current approach that would help in achieving the desired time complexity for both insertion and removal operations? Are there specific data structure combinations or unique implementations that can be utilized in this scenario, especially when dealing with a non-STL environment in C++?

Leetcode 2161: Partition array according to given pivot

I'm currently solving LeetCode problem 2161. Partition Array According to Given Pivot:

You are given a 0-indexed integer array nums and an integer pivot. Rearrange nums such that the following conditions are satisfied:

  • Every element less than pivot appears before every element greater than pivot.

  • Every element equal to pivot appears in between the elements less than and greater than pivot.

  • The relative order of the elements less than pivot and the elements greater than pivot is maintained.

    • More formally, consider every pi, pj where pi is the new position of the ith element and pj is the new position of the jth element. For elements less than pivot, if i < j and nums[i] < pivot and nums[j] < pivot, then pi < pj. Similarly for elements greater than pivot, if i < j and nums[i] > pivot and nums[j] > pivot, then pi < pj.

Return nums after the rearrangement.

Example:

Input: nums = [9,12,5,10,14,3,10], pivot = 10
Output: [9,5,3,10,10,12,14]

My solution:

A solution with O(n) memory is trivial. I tried to come up with an O(1) memory solution and took an approach similar to insertion sort:

public int[] pivotArray(int[] nums, int pivot) {
    for(int i = 0; i < nums.length; i++){
        if(nums[i] < pivot){
            for(int j = i; j > 0 && nums[j - 1] >= pivot; j--){
                int tmp = nums[j];
                nums[j] = nums[j - 1];
                nums[j - 1] = tmp;
            }
        }
        if(nums[i] == pivot){
            for(int j = i; j > 0 && nums[j - 1] > pivot; j--){
                int tmp = nums[j];
                nums[j] = nums[j - 1];
                nums[j - 1] = tmp;
            }
        }
    }
    return nums;
}

The issue is that this solution has O(n^2) time complexity. Is there an algorithm that solves the problem faster than O(n^2)? I'm not sure whether O(n) time with O(1) space is possible, though.
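For reference, the trivial O(n) time / O(n) space version mentioned above looks like this (sketched in Python rather than Java); it is stable for both sides by construction:

def pivot_array(nums, pivot):
    # three stable passes: less-than, equal, greater-than
    return ([x for x in nums if x < pivot]
            + [x for x in nums if x == pivot]
            + [x for x in nums if x > pivot])

print(pivot_array([9, 12, 5, 10, 14, 3, 10], 10))  # [9, 5, 3, 10, 10, 12, 14]

Getting below O(n^2) with O(1) extra space is the hard part: a stable partition can be done in O(n log n) time and O(1) space with a divide-and-conquer scheme based on rotating blocks, which is how C++'s std::stable_partition behaves when no temporary buffer is available.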

Why you shouldn't use online compilers (original: C++ function returning different values for the same input) [closed]

Original question: I was solving a problem where multiple "cases" are given in a single "input", each to be solved independently. However, I noticed a strange phenomenon where my code returns a different answer for the same "case" depending on the previous "case".

My code looks like this:

#include <iostream>
#include <stdlib.h>
#include <algorithm>
#include <vector>
using namespace std;

bool cmp(pair<int, int> a, pair<int, int> b) {
    return a.first < b.first;
}

void solve() {
    int N, Q, C;
    cin >> N >> Q >> C;

    vector<int> val(N+1);
    for (int i = 0; i < N; i++) {
        cin >> val[i];
    }

    vector<pair<int, int> > pairs, fin;
    for (int i = 0; i < Q; i++) {
        int a, b;
        cin >> a >> b;
        a--; b--;
        pairs.push_back(make_pair(a, b));
    }
    sort(pairs.begin(), pairs.end(), cmp);
    fin.push_back(pairs[0]);
    for (int i = 1; i < Q; i++) {
        if (pairs[i].first < pairs[i-1].second) {
            if (pairs[i].second != pairs[i-1].second) {
                cout << -1 << '\n';
                return;
            }
        } else {
            fin.push_back(pairs[i]);
        }
    }
    int maxim = 0, curlastz = 0, lastz = 0, cur = 0, le = fin.size();
    for (int i = 0; i < N; i++) {
        if (val[i] == 0) {
            val[i] = 1;
            lastz = i;
        }
        cur = min(cur, le);
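        // note: min(cur, le) still allows cur == le, so the fin[cur] reads
        // below can go one past the end of fin -- undefined behavior whose
        // result may depend on memory left over from earlier test cases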
        if (i <= fin[cur].first) {
            maxim = max(maxim, val[i]);
            if (i == lastz) {
                val[i] = 1;
                curlastz = i;
            }
        } else if (i < fin[cur].second) {
            if (val[i] > maxim) {
                maxim = val[i];
                val[curlastz] = maxim;
            } else if (val[i] == 0) {
                val[i] = 1;
            }
        } else if (i == fin[cur].second) {
            if (i != lastz) {
                if (val[i] <= maxim) {
                    cout << -1 << '\n';
                    cout << i << maxim;
                    return;
                }
                maxim = val[i];
            } else {
                maxim++;
                val[i] = maxim;
            }
            curlastz = lastz;
            cur++;
        } else if (i > fin[cur].second) {
            if (val[i] == 0) val[i] = 1;
        }
    }
    for (int i = 0; i < N; i++) {
        if (val[i] > C) {
            cout << -1 << '\n';
            return;
        }
    }
    maxim = val[0]; cur = 0;
    for (int i = 0; i < N; i++) {
        cur = min(cur, le);
        if (i <= fin[cur].first) {
            maxim = max(maxim, val[i]);
        } else if (i < fin[cur].second) {
            if (val[i] > maxim) {
                cout << -1 << '\n';
                return;
            }
        } else if (i == fin[cur].second) {
            if (val[i] <= maxim) {
                cout << -1 << '\n';
                return;
            }
            maxim = val[i];
            cur++;
        } else if (i > fin[cur].second) {
            break;
        }
    }
    cout << val[0];
    for (int i = 1; i < N; i++) {
        cout << ' ' << val[i];
    }
    cout << '\n';
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);

    int T;
    cin >> T;
    for (int cases = 0; cases < T; cases++) {
        solve();
    }
}

According to my understanding, the solve() function should start fresh every time, with new input values. However, for some reason, my code keeps outputting different values for the same test "case" depending on what "case" comes before it (from what I can tell by testing different inputs). Why is this happening, and what can I do to fix it?

INPUT CASES

1st "input"

1
10 1 5
0 0 0 0 0 0 0 0 0 0
1 2

The 1 at the top indicates that only one "case" is provided. My output was:

1 2 1 1 1 1 1 1 1 1

which was expected.

However, when the input changes to

2
10 2 8
1 0 0 0 0 5 7 0 0 0
4 6
6 9
10 1 5
0 0 0 0 0 0 0 0 0 0
1 2

my output becomes

-1
1 2 1 1 1 1 1 1 3 1

The first -1 is expected, but notice how the second line has a 3 that was not present when it was the only "case" tested.


I have found that this problem only occurs in the online compiler I was using; the code works just fine in VS Code. I guess I learned my lesson to always use 'legitimate' compilers from now on :)

โŒ
โŒ