Azure Stack HCI ARM deployment parameter value
I am trying to deploy a 3-node Azure Stack HCI cluster via an ARM template. Does anyone know in detail what value the parameter "intentList" expects?
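For context, each entry of `intentList` describes a network intent for the cluster. A minimal sketch of the shape (property names are from my recollection of the Microsoft.AzureStackHCI deploymentSettings schema and the adapter names are illustrative; verify against the schema for your template's API version):

```json
"intentList": [
  {
    "name": "ManagementComputeStorage",
    "trafficType": [ "Management", "Compute", "Storage" ],
    "adapter": [ "ethernet", "ethernet 2" ],
    "overrideVirtualSwitchConfiguration": false,
    "overrideQosPolicy": false,
    "overrideAdapterProperty": false
  }
]
```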
As the title says, I have to find the remaining queens for the n-queens problem, but I may not use recursive calls; I have to use stacks and backtracking. Currently I have:
#include <stdio.h>
#include <stdbool.h>

#define MAX_N 11

typedef struct {
    int row;
    int col;
} Queen;

typedef struct {
    int top;
    Queen items[MAX_N];
} Stack;

void initStack(Stack* stack) {
    stack->top = -1;
}

void push(Stack* stack, Queen queen) {
    if (stack->top < MAX_N - 1) {
        stack->top++;
        stack->items[stack->top] = queen;
    }
}

Queen pop(Stack* stack) {
    if (stack->top >= 0) {
        stack->top--;
        return stack->items[stack->top];
    }
    Queen emptyQueen = { -1, -1 }; // Return an invalid queen
    return emptyQueen;
}

// Helper function to check if a queen can be placed at (row, col)
bool isValid(Queen queens[], int numQueens, int row, int col) {
    for (int i = 0; i < numQueens; i++) {
        if (queens[i].row == row || queens[i].col == col ||
            queens[i].row - queens[i].col == row - col ||
            queens[i].row + queens[i].col == row + col) {
            return false; // Queens attack each other
        }
    }
    return true;
}

void solveQueens(int max_queens, Queen* initQs, int numInitQ) {
    Queen queens[MAX_N];
    Stack stack;
    initStack(&stack);
    // Initialize with initial queens
    for (int i = 0; i < numInitQ; i++) {
        queens[i] = initQs[i];
    }
    int numQueens = numInitQ;
    int row = numInitQ; // Start from the next row
    while (numQueens < max_queens) {
        bool found = false;
        for (int col = 0; col < max_queens; col++) {
            if (isValid(queens, numQueens, row, col)) {
                queens[numQueens] = (Queen){ row, col };
                numQueens++;
                found = true;
                break;
            }
        }
        if (!found) { // backtrack, pop the queen
            queens[numQueens - 1] = pop(&stack);
            numQueens--;
            row = queens[numQueens - 1].row + 1;
            if (numQueens <= numInitQ) { // there are not enough queens in total, therefore no solution
                printf("no solution\n");
                return;
            }
        } else {
            push(&stack, queens[numQueens - 1]);
            row++;
        }
    }
    // Print the solution
    for (int i = 0; i < numQueens; i++) {
        printf("%d %d\n", queens[i].row, queens[i].col);
    }
}
For testing, I have used this main function:
int main() {
    Queen initialQueens[] = { {0, 0} }; // Example initial queens
    int numInitialQueens = 1;
    int maxQueens = 4; // Change this to the board size
    solveQueens(maxQueens, initialQueens, numInitialQueens);
    return 0;
}
This prints "no solution" as expected. However, when I make the board size 5 (setting maxQueens to 5), the function enters an infinite loop. My theory is that the function finds a valid queen, but the total number of queens is not enough, which causes it to backtrack repeatedly. Don't take my word for it; I could be way off, but it might be a lead. Does anyone have fixes or suggestions?
Why is the variable used as an argument to fmt.Print escaped to the heap, while the same variable used as an argument to the builtin print is not?
package main

import "fmt"

func main() {
	a := 1
	fmt.Print(a) // a escapes to heap
	print(a)     // a doesn't escape to heap
}
$ go build -gcflags="-m -m -l" main.go
# command-line-arguments
./main.go:7:12: a escapes to heap:
./main.go:7:12: flow: {storage for ... argument} = &{storage for a}:
./main.go:7:12: from a (spill) at ./main.go:7:12
./main.go:7:12: from ... argument (slice-literal-element) at ./main.go:7:11
./main.go:7:12: flow: {heap} = {storage for ... argument}:
./main.go:7:12: from ... argument (spill) at ./main.go:7:11
./main.go:7:12: from fmt.Print(... argument...) (call parameter) at ./main.go:7:11
./main.go:7:11: ... argument does not escape
./main.go:7:12: a escapes to heap
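The difference comes from the signatures: fmt.Print takes ...interface{}, so the int is boxed into an interface value, and since fmt.Print is an ordinary function the compiler cannot prove the boxed value doesn't outlive the call, so it goes to the heap. print, by contrast, is a compiler builtin with no interface conversion. A minimal sketch that reproduces the boxing without fmt (sink is a made-up name; run with -gcflags="-m -l" to see the same escape diagnostics):

```go
package main

// sink mimics fmt.Print's shape: a variadic interface parameter.
// Each argument is boxed into an interface{} slot, and with inlining
// disabled escape analysis reports the arguments escaping to the heap,
// just as it does for fmt.Print.
func sink(vs ...interface{}) int {
	return len(vs)
}

func main() {
	a := 1
	n := sink(a) // a is boxed here, like in fmt.Print(a)
	_ = n
}
```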
We are using redis-rejson in a production Kubernetes environment with an HA setup using Sentinel. Now we are planning to migrate to redis-stack-server in production, also with an HA setup.
There is an HA Helm chart for Redis: https://artifacthub.io/packages/helm/dandydev-charts/redis-ha
I have tried to update the image from redis to redis-stack-server, but it is not running.
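If you only swapped the image reference, the usual chart convention for overriding it looks like the values snippet below (the exact keys are my assumption; check the chart's values.yaml). Note that redis-stack-server uses its own entrypoint to load the bundled modules, so a chart that passes plain redis-server arguments may need command/args overrides as well:

```yaml
image:
  repository: redis/redis-stack-server
  tag: latest
```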
Does anyone have an idea how to create a Sankey diagram whose edges have a gradient from the left rect's color to the right rect's color?
This is a sample Sankey diagram in Kibana:
And this is my code: https://gist.github.com/s1031432/fdaf4bdbed15f1f1179317dbb93c985d (I'm sorry the code is so long.)
{
$schema: https://vega.github.io/schema/vega/v5.json
data: [
{
// query ES based on the currently selected time range and filter string
name: rawData
url: {
%context%: true
%timefield%: timestamp
index: kibana_sample_data_logs
body: {
size: 0
aggs: {
table: {
composite: {
size: 10000
sources: [
{
stk1: {
terms: {field: "machine.os.keyword"}
}
}
{
stk2: {
terms: {field: "geo.dest"}
}
}
]
}
}
}
}
}
// From the result, take just the data we are interested in
format: {property: "aggregations.table.buckets"}
// Convert key.stk1 -> stk1 for simpler access below
transform: [
{type: "formula", expr: "datum.key.stk1", as: "stk1"}
{type: "formula", expr: "datum.key.stk2", as: "stk2"}
{type: "formula", expr: "datum.doc_count", as: "size"}
]
}
{
name: nodes
source: rawData
transform: [
// when a country is selected, filter out unrelated data
{
type: filter
expr: !groupSelector || groupSelector.stk1 == datum.stk1 || groupSelector.stk2 == datum.stk2
}
// Set new key for later lookups - identifies each node
{type: "formula", expr: "datum.stk1+datum.stk2", as: "key"}
// instead of each table row, create two new rows,
// one for the source (stack=stk1) and one for destination node (stack=stk2).
// The country code stored in stk1 and stk2 fields is placed into grpId field.
{
type: fold
fields: ["stk1", "stk2"]
as: ["stack", "grpId"]
}
// Create a sortkey, different for stk1 and stk2 stacks.
{
type: formula
expr: datum.stack == 'stk1' ? datum.stk1+datum.stk2 : datum.stk2+datum.stk1
as: sortField
}
// Calculate y0 and y1 positions for stacking nodes one on top of the other,
// independently for each stack, and ensuring they are in the proper order,
// alphabetical from the top (reversed on the y axis)
{
type: stack
groupby: ["stack"]
sort: {field: "sortField", order: "descending"}
field: size
}
// calculate vertical center point for each node, used to draw edges
{type: "formula", expr: "(datum.y0+datum.y1)/2", as: "yc"}
]
}
{
name: groups
source: nodes
transform: [
// combine all nodes into country groups, summing up the doc counts
{
type: aggregate
groupby: ["stack", "grpId"]
fields: ["size"]
ops: ["sum"]
as: ["total"]
}
// re-calculate the stacking y0,y1 values
{
type: stack
groupby: ["stack"]
sort: {field: "grpId", order: "descending"}
field: total
}
// project y0 and y1 values to screen coordinates
// doing it once here instead of doing it several times in marks
{type: "formula", expr: "scale('y', datum.y0)", as: "scaledY0"}
{type: "formula", expr: "scale('y', datum.y1)", as: "scaledY1"}
// boolean flag if the label should be on the right of the stack
{type: "formula", expr: "datum.stack == 'stk1'", as: "rightLabel"}
// Calculate traffic percentage for this country using "y" scale
// domain upper bound, which represents the total traffic
{
type: formula
expr: datum.total/domain('y')[1]
as: percentage
}
]
}
{
name: dgroups
source: nodes
transform: [
// combine all nodes into country groups, summing up the doc counts
{
type: aggregate
groupby: ["stack", "grpId"]
fields: ["size"]
ops: ["sum"]
as: ["total"]
}
// re-calculate the stacking y0,y1 values
{
type: stack
groupby: ["stack"]
sort: {field: "grpId", order: "descending"}
field: total
}
// project y0 and y1 values to screen coordinates
// doing it once here instead of doing it several times in marks
{type: "formula", expr: "scale('y', datum.y0)", as: "scaledY0"}
{type: "formula", expr: "scale('y', datum.y1)", as: "scaledY1"}
// boolean flag if the label should be on the right of the stack
{type: "formula", expr: "datum.stack == 'stk2'", as: "rightLabel"}
// Calculate traffic percentage for this country using "y" scale
// domain upper bound, which represents the total traffic
{
type: formula
expr: datum.total/domain('y')[1]
as: percentage
}
]
}
{
// This is a temp lookup table with all the 'stk2' stack nodes
name: destinationNodes
source: nodes
transform: [
{type: "filter", expr: "datum.stack == 'stk2'"}
]
}
{
name: edges
source: nodes
transform: [
// we only want nodes from the left stack
{type: "filter", expr: "datum.stack == 'stk1'"}
// find corresponding node from the right stack, keep it as "target"
{
type: lookup
from: destinationNodes
key: key
fields: ["key"]
as: ["target"]
}
// calculate SVG link path between stk1 and stk2 stacks for the node pair
{
type: linkpath
orient: horizontal
shape: diagonal
sourceY: {expr: "scale('y', datum.yc)"}
sourceX: {expr: "scale('x', 'stk1') + bandwidth('x')"}
targetY: {expr: "scale('y', datum.target.yc)"}
targetX: {expr: "scale('x', 'stk2')"}
}
// A little trick to calculate the thickness of the line.
// The value needs to be the same as the height of the node, but scaling
// size to the screen's height gives an inverted value because the screen's Y
// coordinate goes from top to bottom, whereas the graph's Y=0
// is at the bottom. So subtracting the scaled doc count from the screen height
// (which is the "lower" bound of the "y" scale) gives us the right value
{
type: formula
expr: range('y')[0]-scale('y', datum.size)
as: strokeWidth
}
// Tooltip needs individual link's percentage of all traffic
{
type: formula
expr: datum.size/domain('y')[1]
as: percentage
}
]
}
]
scales: [
{
// calculates horizontal stack positioning
name: x
type: band
range: width
domain: ["stk1", "stk2"]
paddingOuter: 0
paddingInner: 0.96
}
{
// this scale goes up as high as the highest y1 value of all nodes
name: y
type: linear
range: height
domain: {data: "nodes", field: "y1"}
}
{
// use rawData to ensure the colors stay the same when clicking.
name: color
type: ordinal
range: category
domain: {data: "rawData", field: "stk1"}
}
{
// use rawData to ensure the colors stay the same when clicking.
name: dcolor
type: ordinal
range: category
domain: {data: "rawData", field: "stk2"}
}
{
// this scale is used to map internal ids (stk1, stk2) to stack names
name: stackNames
type: ordinal
range: ["Source", "Destination"]
domain: ["stk1", "stk2"]
}
]
axes: [
{
// x axis should use custom label formatting to print proper stack names
orient: bottom
scale: x
encode: {
labels: {
update: {
text: {scale: "stackNames", field: "value"}
}
}
}
}
{orient: "left", scale: "y"}
]
marks: [
{
// draw the connecting line between stacks
type: path
name: edgeMark
from: {data: "edges"}
// this prevents some autosizing issues with large strokeWidth for paths
clip: true
encode: {
update: {
// By default use color of the left node, except when showing traffic
// from just one country, in which case use destination color.
stroke: [
{
test: groupSelector && groupSelector.stack=='stk1'
scale: color
field: stk2
}
{scale: "color", field: "stk1"}
]
strokeWidth: {field: "strokeWidth"}
path: {field: "path"}
// when showing all traffic, and hovering over a country,
// highlight the traffic from that country.
strokeOpacity: {
signal: !groupSelector && (groupHover.stk1 == datum.stk1 || groupHover.stk2 == datum.stk2) ? 0.9 : 0.3
}
// Ensure that the hover-selected edges show on top
zindex: {
signal: !groupSelector && (groupHover.stk1 == datum.stk1 || groupHover.stk2 == datum.stk2) ? 1 : 0
}
// format tooltip string
tooltip: {
signal: datum.stk1 + ' → ' + datum.stk2 + ' ' + format(datum.size, ',.0f') + ' (' + format(datum.percentage, '.1%') + ')'
}
}
// Simple mouseover highlighting of a single line
hover: {
strokeOpacity: {value: 1}
}
}
}
{
type: rect
name: dgroupMark
from: {data: "dgroups"}
encode: {
enter: {
fill: {scale: "dcolor", field: "grpId"}
stroke: {value: "#888"}
strokeWidth: {value: 0.5}
width: {scale: "x", band: 1}
}
update: {
x: {scale: "x", field: "stack"}
y: {field: "scaledY0"}
y2: {field: "scaledY1"}
fillOpacity: {value: 0.6}
tooltip: {
signal: datum.grpId + ' ' + format(datum.total, ',.0f') + ' (' + format(datum.percentage, '.1%') + ')'
}
}
hover: {
fillOpacity: {value: 1}
}
}
}
{
// draw stack groups (countries)
type: rect
name: groupMark
from: {data: "groups"}
encode: {
enter: {
fill: {scale: "color", field: "grpId"}
width: {scale: "x", band: 1}
}
update: {
x: {scale: "x", field: "stack"}
y: {field: "scaledY0"}
y2: {field: "scaledY1"}
fillOpacity: {value: 0.6}
tooltip: {
signal: datum.grpId + ' ' + format(datum.total, ',.0f') + ' (' + format(datum.percentage, '.1%') + ')'
}
}
hover: {
fillOpacity: {value: 1}
}
}
}
{
// draw country code labels on the inner side of the stack
type: text
from: {data: "groups"}
// don't process events for the labels - otherwise line mouseover is unclean
interactive: false
encode: {
update: {
// depending on which stack it is, position x with some padding
x: {
signal: scale('x', datum.stack) + (datum.rightLabel ? bandwidth('x') + 8 : -8)
}
// middle of the group
yc: {signal: "(datum.scaledY0 + datum.scaledY1)/2"}
align: {signal: "datum.rightLabel ? 'left' : 'right'"}
baseline: {value: "middle"}
fontWeight: {value: "bold"}
// only show text label if the group's height is large enough
text: {signal: "abs(datum.scaledY0-datum.scaledY1) > 13 ? datum.grpId : ''"}
}
}
}
{
// Create a "show all" button. Shown only when a country is selected.
type: group
data: [
// We need to make the button show only when groupSelector signal is true.
// Each mark is drawn as many times as there are elements in the backing data.
// Which means that if values list is empty, it will not be drawn.
// Here I create a data source with one empty object, and filter that list
// based on the signal value. This can only be done in a group.
{
name: dataForShowAll
values: [{}]
transform: [{type: "filter", expr: "groupSelector"}]
}
]
// Set button size and positioning
encode: {
enter: {
xc: {signal: "width/2"}
y: {value: 30}
width: {value: 80}
height: {value: 30}
}
}
marks: [
{
// This group is shown as a button with rounded corners.
type: group
// mark name allows signal capturing
name: groupReset
// Only shows button if dataForShowAll has values.
from: {data: "dataForShowAll"}
encode: {
enter: {
cornerRadius: {value: 6}
fill: {value: "#F5F7FA"}
stroke: {value: "#c1c1c1"}
strokeWidth: {value: 2}
// use parent group's size
height: {
field: {group: "height"}
}
width: {
field: {group: "width"}
}
}
update: {
// groups are transparent by default
opacity: {value: 1}
}
hover: {
opacity: {value: 0.7}
}
}
marks: [
{
type: text
// disable events on the text, otherwise it would intercept clicks meant for the button
interactive: false
encode: {
enter: {
// center text in the parent group
xc: {
field: {group: "width"}
mult: 0.5
}
yc: {
field: {group: "height"}
mult: 0.5
offset: 2
}
align: {value: "center"}
baseline: {value: "middle"}
fontWeight: {value: "bold"}
text: {value: "Show All"}
}
}
}
]
}
]
}
]
signals: [
{
// used to highlight traffic to/from the same country
name: groupHover
value: {}
on: [
{
events: @groupMark:mouseover
update: "{stk1:datum.stack=='stk1' && datum.grpId, stk2:datum.stack=='stk2' && datum.grpId}"
}
{events: "mouseout", update: "{}"}
]
}
{
// used to highlight traffic to/from the same country
name: dgroupHover
value: {}
on: [
{
events: @dgroupMark:mouseover
update: "{stk2:datum.grpId=='stk2' && datum.grpId, stk1:datum.grpId=='stk1' && datum.stack}"
}
{events: "mouseout", update: "{}"}
]
}
// used to filter only the data related to the selected country
{
name: groupSelector
value: false
on: [
{
// Clicking groupMark sets this signal to the filter values
events: @groupMark:click!
update: "{stack:datum.stack, stk1:datum.stack=='stk1' && datum.grpId, stk2:datum.stack=='stk2' && datum.grpId}"
}
{
// Clicking "show all" button, or double-clicking anywhere resets it
events: [
{type: "click", markname: "groupReset"}
{type: "dblclick"}
]
update: "false"
}
]
}
]
}
For example: I want the edge from ios (left) to CN (right) to fade from green to purple, rather than being solid green. Thank you so much.
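In Vega, a mark's stroke can be a gradient object, and you can build one per edge in a signal expression so the stops pick up the source and destination colors. A sketch of what the edgeMark stroke encoding could look like (untested against this exact spec; the scale names color and dcolor match the ones defined above):

```
stroke: {
  signal: "{gradient: 'linear', x1: 0, y1: 0, x2: 1, y2: 0, stops: [{offset: 0, color: scale('color', datum.stk1)}, {offset: 1, color: scale('dcolor', datum.stk2)}]}"
}
```

Since the path runs left to right, x1/x2 span the gradient horizontally from the source node's color to the target node's color.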
I'm getting an error with StreamIdentifier when trying to use MultiStreamTracker in a Kinesis consumer application.
java.lang.IllegalArgumentException: Unable to deserialize StreamIdentifier from first-stream-name
What is causing this error? I can't find a good example of using the tracker with Kinesis.
The stream name works when using a consumer with a single stream, so I'm not sure what is happening. It looks like the consumer is trying to parse the accountId and streamCreationEpoch from the name. But when I create the identifiers I am using the singleStreamInstance method. Is the stream name required to have these values? They appear to be optional from the code.
This test is part of a complete example on github.
package kinesis.localstack.example;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.kinesis.common.ConfigsBuilder;
import software.amazon.kinesis.common.InitialPositionInStream;
import software.amazon.kinesis.common.InitialPositionInStreamExtended;
import software.amazon.kinesis.common.KinesisClientUtil;
import software.amazon.kinesis.common.StreamConfig;
import software.amazon.kinesis.common.StreamIdentifier;
import software.amazon.kinesis.coordinator.Scheduler;
import software.amazon.kinesis.exceptions.InvalidStateException;
import software.amazon.kinesis.exceptions.ShutdownException;
import software.amazon.kinesis.lifecycle.events.InitializationInput;
import software.amazon.kinesis.lifecycle.events.LeaseLostInput;
import software.amazon.kinesis.lifecycle.events.ProcessRecordsInput;
import software.amazon.kinesis.lifecycle.events.ShardEndedInput;
import software.amazon.kinesis.lifecycle.events.ShutdownRequestedInput;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy.NoLeaseDeletionStrategy;
import software.amazon.kinesis.processor.MultiStreamTracker;
import software.amazon.kinesis.processor.ShardRecordProcessor;
import software.amazon.kinesis.processor.ShardRecordProcessorFactory;
import software.amazon.kinesis.retrieval.KinesisClientRecord;
import software.amazon.kinesis.retrieval.polling.PollingConfig;
import static java.util.stream.Collectors.toList;
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.CLOUDWATCH;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.DYNAMODB;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.KINESIS;
import static software.amazon.kinesis.common.InitialPositionInStream.TRIM_HORIZON;
import static software.amazon.kinesis.common.StreamIdentifier.singleStreamInstance;
@Testcontainers
public class KinesisMultiStreamTest {
static class TestProcessorFactory implements ShardRecordProcessorFactory {
private final TestKinesisRecordService service;
public TestProcessorFactory(TestKinesisRecordService service) {
this.service = service;
}
@Override
public ShardRecordProcessor shardRecordProcessor() {
throw new UnsupportedOperationException("must have streamIdentifier");
}
public ShardRecordProcessor shardRecordProcessor(StreamIdentifier streamIdentifier) {
return new TestRecordProcessor(service, streamIdentifier);
}
}
static class TestRecordProcessor implements ShardRecordProcessor {
public final TestKinesisRecordService service;
public final StreamIdentifier streamIdentifier;
public TestRecordProcessor(TestKinesisRecordService service, StreamIdentifier streamIdentifier) {
this.service = service;
this.streamIdentifier = streamIdentifier;
}
@Override
public void initialize(InitializationInput initializationInput) {
}
@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
service.addRecord(streamIdentifier, processRecordsInput);
}
@Override
public void leaseLost(LeaseLostInput leaseLostInput) {
}
@Override
public void shardEnded(ShardEndedInput shardEndedInput) {
try {
shardEndedInput.checkpointer().checkpoint();
} catch (Exception e) {
throw new IllegalStateException(e);
}
}
@Override
public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
}
}
static class TestKinesisRecordService {
private List<ProcessRecordsInput> firstStreamRecords = Collections.synchronizedList(new ArrayList<>());
private List<ProcessRecordsInput> secondStreamRecords = Collections.synchronizedList(new ArrayList<>());
public void addRecord(StreamIdentifier streamIdentifier, ProcessRecordsInput processRecordsInput) {
if(streamIdentifier.streamName().contains(firstStreamName)) {
firstStreamRecords.add(processRecordsInput);
} else if(streamIdentifier.streamName().contains(secondStreamName)) {
secondStreamRecords.add(processRecordsInput);
} else {
throw new IllegalStateException("no list for stream " + streamIdentifier);
}
}
public List<ProcessRecordsInput> getFirstStreamRecords() {
return Collections.unmodifiableList(firstStreamRecords);
}
public List<ProcessRecordsInput> getSecondStreamRecords() {
return Collections.unmodifiableList(secondStreamRecords);
}
}
public static final String firstStreamName = "first-stream-name";
public static final String secondStreamName = "second-stream-name";
public static final String partitionKey = "partition-key";
DockerImageName localstackImage = DockerImageName.parse("localstack/localstack:latest");
@Container
public LocalStackContainer localstack = new LocalStackContainer(localstackImage)
.withServices(KINESIS, CLOUDWATCH)
.withEnv("KINESIS_INITIALIZE_STREAMS", firstStreamName + ":1," + secondStreamName + ":1");
public Scheduler scheduler;
public TestKinesisRecordService service = new TestKinesisRecordService();
public KinesisProducer producer;
@BeforeEach
void setup() {
KinesisAsyncClient kinesisClient = KinesisClientUtil.createKinesisAsyncClient(
KinesisAsyncClient.builder().endpointOverride(localstack.getEndpointOverride(KINESIS)).region(Region.of(localstack.getRegion()))
);
DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(localstack.getRegion())).endpointOverride(localstack.getEndpointOverride(DYNAMODB)).build();
CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(localstack.getRegion())).endpointOverride(localstack.getEndpointOverride(CLOUDWATCH)).build();
MultiStreamTracker tracker = new MultiStreamTracker() {
private List<StreamConfig> configs = List.of(
new StreamConfig(singleStreamInstance(firstStreamName), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)),
new StreamConfig(singleStreamInstance(secondStreamName), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)));
@Override
public List<StreamConfig> streamConfigList() {
return configs;
}
@Override
public FormerStreamsLeasesDeletionStrategy formerStreamsLeasesDeletionStrategy() {
return new NoLeaseDeletionStrategy();
}
};
ConfigsBuilder configsBuilder = new ConfigsBuilder(tracker, "KinesisPratTest", kinesisClient, dynamoClient, cloudWatchClient, UUID.randomUUID().toString(), new TestProcessorFactory(service));
scheduler = new Scheduler(
configsBuilder.checkpointConfig(),
configsBuilder.coordinatorConfig(),
configsBuilder.leaseManagementConfig(),
configsBuilder.lifecycleConfig(),
configsBuilder.metricsConfig(),
configsBuilder.processorConfig().callProcessRecordsEvenForEmptyRecordList(false),
configsBuilder.retrievalConfig()
);
new Thread(scheduler).start();
producer = producer();
}
@AfterEach
public void teardown() throws ExecutionException, InterruptedException, TimeoutException {
producer.destroy();
Future<Boolean> gracefulShutdownFuture = scheduler.startGracefulShutdown();
gracefulShutdownFuture.get(60, TimeUnit.SECONDS);
}
public KinesisProducer producer() {
var configuration = new KinesisProducerConfiguration()
.setVerifyCertificate(false)
.setCredentialsProvider(localstack.getDefaultCredentialsProvider())
.setMetricsCredentialsProvider(localstack.getDefaultCredentialsProvider())
.setRegion(localstack.getRegion())
.setCloudwatchEndpoint(localstack.getEndpointOverride(CLOUDWATCH).getHost())
.setCloudwatchPort(localstack.getEndpointOverride(CLOUDWATCH).getPort())
.setKinesisEndpoint(localstack.getEndpointOverride(KINESIS).getHost())
.setKinesisPort(localstack.getEndpointOverride(KINESIS).getPort());
return new KinesisProducer(configuration);
}
@Test
void testFirstStream() {
String expected = "Hello";
producer.addUserRecord(firstStreamName, partitionKey, ByteBuffer.wrap(expected.getBytes(StandardCharsets.UTF_8)));
var result = await().timeout(600, TimeUnit.SECONDS)
.until(() -> service.getFirstStreamRecords().stream()
.flatMap(r -> r.records().stream())
.map(KinesisClientRecord::data)
.map(r -> StandardCharsets.UTF_8.decode(r).toString())
.collect(toList()), records -> records.size() > 0);
assertThat(result).anyMatch(r -> r.equals(expected));
}
@Test
void testSecondStream() {
String expected = "Hello";
producer.addUserRecord(secondStreamName, partitionKey, ByteBuffer.wrap(expected.getBytes(StandardCharsets.UTF_8)));
var result = await().timeout(600, TimeUnit.SECONDS)
.until(() -> service.getSecondStreamRecords().stream()
.flatMap(r -> r.records().stream())
.map(KinesisClientRecord::data)
.map(r -> StandardCharsets.UTF_8.decode(r).toString())
.collect(toList()), records -> records.size() > 0);
assertThat(result).anyMatch(r -> r.equals(expected));
}
}
Here is the error I am getting.
[Thread-9] ERROR software.amazon.kinesis.coordinator.Scheduler - Worker.run caught exception, sleeping for 1000 milli seconds!
java.lang.IllegalArgumentException: Unable to deserialize StreamIdentifier from first-stream-name
at software.amazon.kinesis.common.StreamIdentifier.multiStreamInstance(StreamIdentifier.java:75)
at software.amazon.kinesis.coordinator.Scheduler.getStreamIdentifier(Scheduler.java:1001)
at software.amazon.kinesis.coordinator.Scheduler.buildConsumer(Scheduler.java:917)
at software.amazon.kinesis.coordinator.Scheduler.createOrGetShardConsumer(Scheduler.java:899)
at software.amazon.kinesis.coordinator.Scheduler.runProcessLoop(Scheduler.java:419)
at software.amazon.kinesis.coordinator.Scheduler.run(Scheduler.java:330)
at java.base/java.lang.Thread.run(Thread.java:829)
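For what it's worth, the stack trace shows the Scheduler calling StreamIdentifier.multiStreamInstance on the stored stream name, which expects the serialized multi-stream form accountId:streamName:creationEpoch rather than a bare stream name. A tiny self-contained illustration of that expected shape (the helper, account id, and epoch are made up for illustration; they are not KCL API calls):

```java
public class StreamIdFormat {
    // Builds the serialized identifier shape the multi-stream code path parses,
    // i.e. "accountId:streamName:creationEpoch".
    static String multiStreamName(String accountId, String streamName, long creationEpoch) {
        return accountId + ":" + streamName + ":" + creationEpoch;
    }

    public static void main(String[] args) {
        // prints 123456789012:first-stream-name:1
        System.out.println(multiStreamName("123456789012", "first-stream-name", 1));
    }
}
```

In other words, a lease created from singleStreamInstance("first-stream-name") has no accountId or creationEpoch to parse, which is consistent with the IllegalArgumentException above.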
When I execute the module, its output is as follows:
{'TEST': {'pid': 116441,
'retcode': 0,
'stdout': ' total used free shared buff/cache available\nMem: 503 341 31 3 131 120\nSwap: 31 0 31',
'stderr': ''}}
When I execute the module via state, its output is as follows:
{'TEST': {'cmd_|-test_|-free -g_|-run': {'name': 'free -g',
'changes': {'pid': 107058,
'retcode': 0,
'stdout': ' total used free shared buff/cache available\nMem: 503 341 30 3 131 120\nSwap: 31 0 31',
'stderr': ''},
'result': True,
'comment': 'Command "free -g" run',
'__sls__': 'ha_action',
'__run_num__': 0,
'start_time': '15:44:10.604521',
'duration': 141.64,
'__id__': 'test'}}}
The first output is what I want: it includes the minion id, and you can directly get the execution status and output. The second output is more complicated; I only want the execution output and status, and this layout makes them harder to extract. Does SaltStack have a parameter or method to define the output format of state runs?
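On the CLI you can change the renderer with options such as --out=json, and state runs also honor --state-output (for example terse), but the returned data structure keeps the state-ID keys either way. If you just need the module-style shape, you can flatten the state return yourself; a minimal sketch assuming the dict layout shown above (the function name is mine):

```python
def flatten_state_return(ret):
    """Reduce a state return ({minion: {state_id: {...}}}) to a
    module-style {minion: {...changes, 'result': bool}} shape."""
    out = {}
    for minion, states in ret.items():
        for result in states.values():
            merged = dict(result.get("changes", {}))
            merged["result"] = result["result"]
            out[minion] = merged
    return out
```

Note this keeps only the last state per minion; with several states per minion you would key the inner dict by state ID instead.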
Since Next.js itself is a full-stack framework, should I use it as my full stack, or only on the front end for SSR, with Node and Express for the backend? I am using MongoDB as my database.
I want to use JS for my whole app, so I am not using Django or Laravel here. I am new to Next.js, so please suggest what would be better to choose and why. Thanks in advance.
I have an index, and in our post-indexing step we re-index it; afterwards the pipeline (written in Java) runs an aggregation query to fetch product attributes (for example categories and their counts), and we cache these values. I noticed that the values in the index and in the cache are not the same.
I believe the problem is that we fetch the categories immediately after re-indexing, when the index may not yet be ready to be read (that is, re-indexing may not be completely finished). Some answers suggest the Refresh API in Java:
IndexRequest indexRequest = new IndexRequest(indexAlias);
indexRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
but my operation is not actually an IndexRequest; it is a SearchSourceBuilder, since we are passing an aggregation query to the index.
How can I know that my index is fully ready to be read its most up-to-date data?
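If the writing side cannot use WAIT_UNTIL, the reading side can force visibility by issuing an explicit refresh on the index (or its alias) before running the aggregation; once the refresh returns, a search sees every operation completed before it. In REST terms the call is simply:

```
POST /<indexAlias>/_refresh
```

Assuming the high-level REST client, the equivalent is client.indices().refresh(new RefreshRequest(indexAlias), RequestOptions.DEFAULT) invoked right before the search that carries the SearchSourceBuilder.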
I am trying to create a sudoku solver for classic 9x9 boards. To accomplish this I use several different logic methods, such as solving cells/squares when there is only one possible answer, but I cannot cover every single situation. Currently, when my code runs into a dead end and loops indefinitely, I try to guess a number and check whether the board can be solved with that guess; if not, the program should revert to the previous board state. I have been stuck on this for many hours with no progress. This is the link to the GitHub repo for my code: "https://github.com/Yeldood/SOCS/tree/main/SudokuSolve/Sudoku%20Solver/src/sudokuCode
And this is the code I believe is problematic:
public void logicCycles() throws Exception
{
    Stack<Cell[][]> stack = new Stack<Cell[][]>();
    Board[] manta = new Board[81];
    int pointer = 0;
    boolean check = false;
    int previousSolve = -3;
    System.out.println("2");
    while (isSolved() == false)
    {
        int changesMade = 0;
        do
        {
            changesMade = 0;
            changesMade += logic1();
            //changesMade += logic2();
            //changesMade += logic3();
            //changesMade += logic4();
            //System.out.println("P:");
            //displayPotentials();
            display();
            //displayPotentials();
            Thread.sleep(10); // TODO: get rid of
            if (errorFound()) { // reverts board to previous version, and eliminates the previous guess
                pointer--;
                board = manta[pointer].board; // reverts board to previous version
                for (int y = 0; y < 9; y++)
                    for (int x = 0; x < 9; x++)
                        if (board[y][x].getNumber() == 0 && board[y][x].numberOfPotentials() > 1)
                            board[y][x].cantBe(previousSolve);
                        else if (board[y][x].getNumber() == 0 && board[y][x].numberOfPotentials() == 1) {
                            solve(x, y, board[y][x].getFirstPotential());
                        }
                changesMade++;
                // stack.pop();
                // for(int y = 0; y < 9; y++)//eliminate prev guess
                //     for(int x = 0; x < 9; x++)
                //         if(board[y][x].getNumber() == 0 && board[y][x].numberOfPotentials() > 1) {
                //             board[y][x].cantBe(board[y][x].getFirstPotential());
                //         }
            }
            System.out.println("Changes made: " + changesMade);
        } while (changesMade != 0);
        check = false;
        for (int y = 0; y < 9; y++) { // copy the board, then guess the first potential of the first open cell
            for (int x = 0; x < 9; x++) {
                if (board[y][x].getNumber() == 0 && board[y][x].numberOfPotentials() > 1) {
                    // stack.push(board);
                    manta[pointer] = new Board();
                    manta[pointer].board = boardCopy().board;
                    // board = boardCopy().board;
                    previousSolve = board[y][x].getFirstPotential();
                    solve(x, y, previousSolve); // solves for a guess
                    System.out.println("Board guess made");
                    System.out.println("New board");
                    display();
                    check = true;
                    pointer++;
                    break;
                }
            }
            if (check)
                break;
        }
    }
}

public Board boardCopy() {
    boolean[] currentPotential = new boolean[10];
    Board temp = new Board();
    for (int h = 0; h < 9; h++)
        for (int l = 0; l < 9; l++) {
            currentPotential = new boolean[10];
            for (int u = 1; u < 10; u++) {
                currentPotential[u] = board[h][l].canBe(u);
            }
            Cell tempCell = new Cell();
            temp.board[h][l] = tempCell;
            temp.board[h][l].setBoxID(board[h][l].getBoxID());
            temp.board[h][l].setNumber(board[h][l].getNumber());
            temp.board[h][l].setPotential(currentPotential);
        }
    return temp;
}
I have tested it without the guessing and it works perfectly fine without any exceptions, which leads me to conclude that the guessing is the problem. The pointer I use for my board array ends up as negative one, which leads to an invalid access into my board array, manta.
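For comparison, the save-before-guess / pop-on-error pattern can be sketched in isolation. This is a minimal, hypothetical example (the class name, the `Deque`, and the plain `int[][]` board are illustrative assumptions, not the project's `Board`/`Cell` types): a deep copy is pushed before each guess, and a contradiction pops it back, so the stack can never underflow the way a raw pointer can.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class GuessStackSketch {
    // Deep-copy a 9x9 grid so the snapshot is independent of later edits.
    static int[][] copy(int[][] grid) {
        int[][] out = new int[9][9];
        for (int y = 0; y < 9; y++)
            System.arraycopy(grid[y], 0, out[y], 0, 9);
        return out;
    }

    public static void main(String[] args) {
        int[][] board = new int[9][9];
        Deque<int[][]> snapshots = new ArrayDeque<>();

        snapshots.push(copy(board)); // save state BEFORE guessing
        board[0][0] = 5;             // the guess

        // ... suppose the logic then finds a contradiction: revert.
        if (!snapshots.isEmpty()) {
            board = snapshots.pop();
        }

        System.out.println(board[0][0]); // prints 0: the guess was undone
    }
}
```

Guarding the pop with `isEmpty()` (or checking `pointer > 0` in the array version) is what prevents the negative-index situation described in the question.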
I need to retrieve the data of a network that may or may not exist, and if it does not exist, it should be created. I need to integrate this script into a CI/CD pipeline.
This is my code:
data "openstack_networking_network_v2" "existed_network" {
  name = "Network-name"
}

resource "openstack_networking_network_v2" "edums_network" {
  count = data.openstack_networking_network_v2.existed_network.[*].id == "" ? 1 : 0
  name  = "Network-name"
}
I end up with this error when the network does not exist: Error: Your query returned no results. Please change your search criteria and try again.
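For context: a Terraform data source fails the whole plan when it matches nothing, so the lookup cannot be made conditional on its own result. One common workaround, sketched here under the assumption that the CI/CD pipeline can pass in a `network_exists` input variable (the variable name is illustrative, not from the original code), is to gate both blocks with `count`:

```hcl
variable "network_exists" {
  type    = bool
  default = false
}

# Only query the network when the pipeline says it already exists.
data "openstack_networking_network_v2" "existed_network" {
  count = var.network_exists ? 1 : 0
  name  = "Network-name"
}

# Otherwise, create it.
resource "openstack_networking_network_v2" "edums_network" {
  count = var.network_exists ? 0 : 1
  name  = "Network-name"
}
```

The pipeline would then decide the flag (for example via an OpenStack CLI lookup) before running `terraform plan`.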
I'm trying simple routing with Expo and React Navigation, but nothing shows on the screen.
This is my navigation code:
import * as React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { View, Text } from 'react-native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';

const Stack = createNativeStackNavigator();

function Home() {
  console.log('Home');
  return (
    <View>
      <Text>Home</Text>
    </View>
  );
}

function MyStack() {
  return (
    <Stack.Navigator>
      <Stack.Screen name="Home" component={Home} />
    </Stack.Navigator>
  );
}

export default function Navigation() {
  return (
    <NavigationContainer>
      <MyStack />
    </NavigationContainer>
  );
}
This is my App.js code
import 'react-native-gesture-handler';
import { StatusBar } from 'expo-status-bar';
import { StyleSheet, Text, View } from 'react-native';
import Navigation from './src/navigation/Navigation';

export default function App() {
  return (
    <View style={styles.container}>
      <Navigation />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: 'red',
    alignItems: 'center',
    justifyContent: 'center',
  },
});
In this short piece of code, console.log() in the Home component runs and prints Home to the console, but nothing appears on the screen.
Below is the content of the package.json file
{
  "name": "new-dictionary",
  "version": "1.0.0",
  "main": "node_modules/expo/AppEntry.js",
  "scripts": {
    "start": "expo start",
    "android": "expo start --android",
    "ios": "expo start --ios",
    "web": "expo start --web"
  },
  "dependencies": {
    "@react-navigation/native": "^6.1.9",
    "@react-navigation/native-stack": "^6.9.17",
    "@react-navigation/stack": "^6.3.20",
    "expo": "~49.0.15",
    "expo-status-bar": "~1.6.0",
    "react": "18.2.0",
    "react-native": "0.72.6",
    "react-native-gesture-handler": "~2.12.0",
    "react-native-reanimated": "~3.3.0",
    "react-native-safe-area-context": "4.6.3",
    "react-native-screens": "~3.22.0"
  },
  "devDependencies": {
    "@babel/core": "^7.20.0"
  },
  "private": true
}
I would be very happy if you could help me. Thanks.
I have data coming from an API like:
const abc = [
  {
    date: '2023-12-8',
    value: 'mop'
  },
  {
    date: '2023-10-8',
    value: 'qrs'
  }
];
How can I bind two inputs to act as a date range? The date comes as a string; how can I filter on it? What is the best way to solve this kind of date-range issue?
export const columns: ColumnDef<VTDATA>[] = [
  {
    accessorKey: "value",
    header: "Value",
  },
  {
    accessorKey: "date",
    header: "Date",
  },
];
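As a starting point, the filtering itself can be sketched independently of the table (the `Row` type and `filterByDateRange` name here are assumptions for illustration): parse the from/to inputs and each row's date string into timestamps, and keep the rows that fall inside the range.

```typescript
type Row = { date: string; value: string };

// Keep rows whose date falls inside [from, to], all given as date strings.
function filterByDateRange(rows: Row[], from: string, to: string): Row[] {
  const start = new Date(from).getTime();
  const end = new Date(to).getTime();
  return rows.filter((row) => {
    const t = new Date(row.date).getTime();
    return t >= start && t <= end;
  });
}

const abc: Row[] = [
  { date: '2023-12-8', value: 'mop' },
  { date: '2023-10-8', value: 'qrs' },
];

// Keeps only the 'qrs' row, whose date lies in October.
console.log(filterByDateRange(abc, '2023-10-01', '2023-11-01'));
```

One caveat: non-zero-padded strings like `'2023-12-8'` are parsed in an implementation-specific way by `new Date()`; normalizing to ISO form (`'2023-12-08'`) before comparing is safer.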
I have an enum on which I have to perform pattern matching. But since I am running the program on a VM which has limited stack memory (< 4 KB), I allocated the enum on the heap using Box. While performing pattern matching I have to dereference it, which causes it to be moved onto the stack, which I don't want. Is it possible to perform pattern matching on boxed values?
I want to achieve something like the below:
pub enum MyEnum {
    A,
    B,
}

let a = Box::new(MyEnum::A);
let value = match a {
    MyEnum::A => 1,
    MyEnum::B => 2,
};
This is the error that I get:
error[E0308]: mismatched types
   --> src/entrypoint.rs:119:9
    |
118 |     match a {
    |           ---
    |           |
    |           this expression has type `Box<MyEnum>`
    |           help: consider dereferencing the boxed value: `*a`
119 |     MyEnum::A => 1,
    |     ^^^^^^^^^^^^^^^ expected struct `Box`, found enum `MyEnum`
    |
    = note: expected struct `Box<MyEnum>`
               found enum `MyEnum`
I tried dereferencing it as the compiler suggested, but since the value is placed on the stack when dereferencing, I could not use it. I want to match the enum without dereferencing it.
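For what it's worth, a minimal sketch of matching through a reference instead of by value: `a.as_ref()` (equivalently `&*a`) borrows the heap value in place, so the match never moves the enum out of the Box.

```rust
#[allow(dead_code)]
enum MyEnum {
    A,
    B,
}

fn main() {
    let a = Box::new(MyEnum::A);
    // Match on a borrow of the boxed value; nothing is moved onto the
    // stack beyond the reference itself.
    let value = match a.as_ref() {
        MyEnum::A => 1,
        MyEnum::B => 2,
    };
    assert_eq!(value, 1);
    // `a` is still usable here because the match only borrowed it.
    drop(a);
}
```

Whether this satisfies the < 4 KB constraint depends on the rest of the program, but the match itself only needs room for a reference.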
I am using TanStack Table v8 to display tabular data. Below is my app's (HomePage) structure. On HomePage, I fetch data from an API using useEffect and pass it to EmailTable.
<Header />
<Statistics />
{emails.length > 0 ? (
  <EmailTable
    emails={emails}
    isLoading={isLoading}
  />
) : (
  <Spinner />
)}
In the EmailTable component I am able to access/log the emails (the API data). But when I pass those emails to useReactTable, I get multiple errors like Cannot read properties of undefined (reading 'length'). If I store the emails data in a different variable and pass that variable to useReactTable, it works fine.
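For illustration, one commonly suggested pattern (a sketch with assumed prop names, not confirmed as the fix for this exact setup) is to hand useReactTable a stable, never-undefined data reference, e.g. via useMemo with an empty-array fallback:

```jsx
import { useMemo } from 'react';
import { useReactTable, getCoreRowModel } from '@tanstack/react-table';

function EmailTable({ emails, columns }) {
  // Fall back to a stable empty array so `data` is never undefined,
  // and memoize so the reference does not change on every render.
  const data = useMemo(() => emails ?? [], [emails]);

  const table = useReactTable({
    data,
    columns,
    getCoreRowModel: getCoreRowModel(),
  });

  // ... render table.getRowModel().rows here
  return null; // placeholder
}
```

This mirrors the observation above that copying the emails into a separate variable makes it work: the point is a defined, stable reference.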