- Taking the inverse (not the reciprocal) of both sides of an inequality – math.stackexchange.com: This is something I'm having a hard time finding online, but say we know that $f(x) > g(x)$ (for all inputs $x > a_{0}$ for some $a_{0}$); would it always be true that $f^{-1}(x) < ...
- python function set value according to master dataframe
I need to define a function that, given an input value and a master dataframe, returns the value the dataframe maps that input to.
This is the master dataframe (df_master) the function has to receive; depending on it and the input value, it returns a value:
value (from) | value (until) | value to return
---|---|---
0 | 30 | 1
31 | 50 | 2
51 | 100 | 3
For example, if input value = 10, return 1
For example, if input value = 90, return 3
...
The function should be something like:
def assign_value(input_var, df_master):
...
Thanks.
def assign(input_variable, df_master):
...
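A sketch of what such a lookup could look like (an assumption on my part: the ranges in df_master are inclusive and non-overlapping, and the column names match the table above):

```python
import pandas as pd

def assign_value(input_var, df_master):
    # select the row whose [from, until] range contains input_var
    mask = (df_master['value (from)'] <= input_var) & (input_var <= df_master['value (until)'])
    matches = df_master.loc[mask, 'value to return']
    return matches.iloc[0] if not matches.empty else None

df_master = pd.DataFrame({'value (from)': [0, 31, 51],
                          'value (until)': [30, 50, 100],
                          'value to return': [1, 2, 3]})
print(assign_value(10, df_master))  # 1
print(assign_value(90, df_master))  # 3
```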
- Any Ideas Why my VAE Model's Reconstructed Loss Remains at 0.69?
My task is to use a VAE model for binary classification. The encoder part will use an LSTM model, while the decoder will use an MLP. My data is time series data, which can be seen as 20 input features and 1 output target (0 or 1).
- First, I used a standalone LSTM model for classification, and the loss converged.
- Then, I used an AE model, where the encoder is LSTM and the decoder is MLP, and the model converged well.
- However, when I used the VAE model, my reconstruction loss remained around 0.69, which basically means the model did not learn anything beyond coin-flipping. At the same time, my KL divergence decreased to a relatively small number, so I suspect there might be a problem in how mu and logvar are calculated.
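As a side note, a binary cross-entropy stuck near 0.69 is exactly what a constant 0.5 prediction produces, since -ln(0.5) = ln 2 ≈ 0.693 is chance level for a balanced binary target:

```python
import math

# BCE of always predicting p = 0.5 for a binary target: -ln(0.5) = ln 2
chance_level_bce = -math.log(0.5)
print(round(chance_level_bce, 4))  # 0.6931
```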
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMEncoder(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, latent_size):
        super(LSTMEncoder, self).__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        self.latent_size = latent_size
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True, dropout=0.2)
        self.fc_mu = nn.Linear(hidden_size, latent_size)
        self.fc_logvar = nn.Linear(hidden_size, latent_size)

    def forward(self, x):
        batch_size = x.size(0)
        h0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).to(x.device)
        out, _ = self.lstm(x, (h0, c0))
        out = out[:, -1, :]
        mu = self.fc_mu(out)
        logvar = self.fc_logvar(out)
        return mu, logvar

class MLPDecoder(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(MLPDecoder, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.act1 = nn.LeakyReLU()
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.act2 = nn.LeakyReLU()
        self.fc3 = nn.Linear(hidden_size, output_size)
        self.act3 = nn.Sigmoid()

    def forward(self, x):
        x = self.fc1(x)
        x = self.act1(x)
        x = self.fc2(x)
        x = self.act2(x)
        x = self.fc3(x)
        x = self.act3(x)
        return x

class VAE(nn.Module):
    def __init__(self, input_size, hidden_size_encoder, latent_size, hidden_size_decoder, output_size, num_layers):
        super(VAE, self).__init__()
        self.encoder = LSTMEncoder(input_size, hidden_size_encoder, num_layers, latent_size)
        self.decoder = MLPDecoder(latent_size, hidden_size_decoder, output_size)

    def reparameterize(self, mu, logvar):
        if self.training:
            std = torch.exp(0.5 * logvar)
            eps = torch.randn_like(std)
            return eps.mul(std).add_(mu)
        else:
            return mu

    def forward(self, x):
        mu, logvar = self.encoder(x)
        z = self.reparameterize(mu, logvar)
        decoded = self.decoder(z)
        return decoded, mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    reconstruction_loss = F.binary_cross_entropy(recon_x, x, reduction='mean')
    kl_divergence_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return reconstruction_loss + kl_divergence_loss, reconstruction_loss, kl_divergence_loss
I've tried to change the hyperparameters, including hidden and latent dimensions, learning rate adjustments, adding more layers to the MLP or LSTM, and incorporating regularization techniques like L1 and L2. However, none of these adjustments have resulted in improvement.
I want to understand why this issue occurred. My goal is to achieve convergence with this model so that I can use it for my binary classification task.
- Separating Gamma in two independent functions – mathoverflow.net: I've encountered a problem in my PhD. I would greatly appreciate any suggestions, tips, or comments you might have. The problem is: Let $\Gamma(s,x)$ be the incomplete gamma function. Given integers $n ...
- Output a 1-2-3 sequence – codegolf.stackexchange.com: For the purposes of this challenge, a 1-2-3 sequence is an infinite sequence of increasing positive integers such that for any positive integer \$n\$, exactly one of \$n, 2n,\$ and \$3n\$ appears in ...
- Python sorted() compare function
I am trying to construct a suffix array from an implicit suffix tree.
Here is my node structure. The label is the start index of the suffix if the node is a leaf. The branches are stored in a dict like {(1, 2): someNode, (0, 0): someOtherNode}: the key is a tuple of (start, end) indexes of the substring on that branch, and the value is the node that branch is connected to.
class Node:
    def __init__(self):
        self.branches = {}
        self.label = -1
I'm trying to use a dfs to get the suffix array. However, the dfs should be lexicographically ordered.
Since I am using an implicit suffix tree where the indexes are stored on the branches instead of the actual sub-string, I am wondering how I can run the dfs in lexicographical order.
So I guess now the problem is down to sorting the dict on the sub-string indicated by the indexes in the keys. Can I use a compare function to do that? I'm really not sure how.
def depth_first_traversal(node, suffix_array):
    if node.label != -1:  # if the node is a leaf
        suffix_array.append(node.label)
    for _, child_node in sorted(node.branches.items()):
        depth_first_traversal(child_node, suffix_array)

def construct_suffix_array(root):
    suffix_array = []
    depth_first_traversal(root, suffix_array)
    return suffix_array
text = "mississippi$"
suffix_tree_root = buildSuffixTree(text)
suffix_array = construct_suffix_array(suffix_tree_root)
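One way to get the lexicographic ordering the question asks about is to sort the branch dict by the substring each (start, end) key denotes, using a key function rather than a comparator. A minimal sketch, assuming the global text is accessible and the end indexes are inclusive (the node values here are stand-in strings):

```python
text = "mississippi$"

# hypothetical branches dict: keys are (start, end) index tuples into text
branches = {(1, 2): "nodeA", (0, 0): "nodeB", (8, 10): "nodeC"}

# sort branches by the substring their indexes denote, not by the raw tuples
ordered = sorted(branches.items(), key=lambda kv: text[kv[0][0]:kv[0][1] + 1])
print([text[s:e + 1] for (s, e), _ in ordered])  # ['is', 'm', 'ppi']
```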
- Why can't I use a list of functions returning a generic class with different type parameters in C#
I'm working on a C# project where I have an interface IAnimal and two classes Dog and Cat that implement this interface. I also have a generic class Zoo<T> where T is a type that implements IAnimal.
Here's the relevant code:
public interface IAnimal { /* ... */ }
public class Dog : IAnimal { /* ... */ }
public class Cat : IAnimal { /* ... */ }
public class Zoo<T> where T : IAnimal { /* ... */ }
I'm trying to create two functions, CreateZooOfDog and CreateZooOfCat, that return a Zoo<Dog> and a Zoo<Cat> respectively. I then want to add these functions to a list and call them in a loop. Here's what I tried:
public Zoo<Dog> CreateZooOfDog() { /* ... */ }
public Zoo<Cat> CreateZooOfCat() { /* ... */ }
var zooes = new List<Func<Zoo<IAnimal>>>();
Func<Zoo<Dog>> zooOfDog = () => program.CreateZooOfDog();
Func<Zoo<Cat>> zooOfCat = () => program.CreateZooOfCat();
zooes.Add(zooOfDog);
zooes.Add(zooOfCat);
However, I'm getting a compile-time error on the CreateZooOfDog and CreateZooOfCat lines. I don't understand why this is happening since Dog and Cat both implement IAnimal. Could someone explain why this is happening and how I can fix it?
- Efficient way to use apply a function on Pandas rows [duplicate]
I am looking for an efficient way to apply a function on each row of a dataframe to perform some operation and repeat the row by a number defined in another column. Currently, I am doing it by iterating over each row, but it takes too long on a large dataframe.
Sample code is as below:
import pandas as pd

def my_func(row):
    row = row.to_frame().T
    repeated_row = row.loc[row.index.repeat(row['col2'])]
    return repeated_row

df = pd.DataFrame(data={'col1': list('abc'),
                        'col2': [2, 2, 3]})
df_comb = pd.DataFrame()
for i, row in df.iterrows():
    df_rep = my_func(row)
    df_comb = pd.concat([df_comb, df_rep], axis=0)
However, I want a solution that doesn't use the for loop above, and I couldn't find an existing answer for this. I imagine there is an equivalent way to use the "apply" function on this df, such as:
df_comp = pd.concat([df.apply(lambda row: my_func(row)), axis=1], axis=0)
But at the moment this syntax does not work properly.
Much appreciated if you could point out the correct solution.
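For reference, the repetition itself can be done in one vectorised step on the whole frame, without apply or iterrows, by repeating the index; a sketch using the sample df above:

```python
import pandas as pd

df = pd.DataFrame({'col1': list('abc'), 'col2': [2, 2, 3]})

# repeat each row as many times as its col2 value, in a single indexing step
df_comb = df.loc[df.index.repeat(df['col2'])].reset_index(drop=True)
print(df_comb['col1'].tolist())  # ['a', 'a', 'b', 'b', 'c', 'c', 'c']
```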
- Trying to create a callback function from one widget to another in Flutter. It won't work
I'm trying to create the simple "press button to increase number" program in Flutter. I separated the text widget that shows the number and the button that increases the number into their own classes and files. I then imported them to the main file.
I'm trying to use a callback function to connect the two widgets together to create the functionality. It doesn't show any errors, but the button doesn't do anything.
Here is the main file:
import 'package:flutter/material.dart';
import 'package:searchbartest/number.dart';
import 'package:searchbartest/button.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        body: Center(
          child: Row(
            children: [
              const SizedBox(width: 20),
              AddButton(increaseNumberCallBack: () {}),
              const SizedBox(width: 20),
              const Number(),
            ],
          ),
        ),
      ),
    );
  }
}
Here is the file containing the text widget that shows the number and the function to increase it:
import 'package:flutter/material.dart';

class Number extends StatefulWidget {
  const Number({super.key});

  @override
  State<Number> createState() => _NumberState();
}

class _NumberState extends State<Number> {
  int number = 0;

  void increaseNumber() {
    setState(() {
      number++;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Text(
      number.toString(),
      style: const TextStyle(fontSize: 50.0),
    );
  }
}
Here is the file containing the button that is supposed to increase the number and my poor attempt at making a callback function:
import 'package:flutter/material.dart';

class AddButton extends StatefulWidget {
  const AddButton({super.key, required this.increaseNumberCallBack});

  final Function() increaseNumberCallBack;

  void increaseNumber() {
    increaseNumberCallBack();
  }

  @override
  State<AddButton> createState() => _AddButtonState();
}

class _AddButtonState extends State<AddButton> {
  @override
  Widget build(BuildContext context) {
    return IconButton(
      onPressed: () {
        widget.increaseNumber();
      },
      icon: const Icon(Icons.add, size: 50),
    );
  }
}
For some reason it's not working. Thanks for any help.
- Lindelöf hypotheses for derivatives of zeta – mathoverflow.net: The Lindelöf hypothesis says that if we have $$\zeta(\sigma+iT)=\mathcal O(T^a)$$ then if one considers $\sigma=1/2$ then $\inf a=0$. Further, from convexity and the functional equation this implies ...
- Find the center of all circles that touch the x-axis and a circle around the origin – math.stackexchange.com: Given a circle $C$ of radius $1$ around the origin, I want to determine the locus of the centers of all circles that touch $C$ and the $x$-axis. This is the red curve in the following Desmos plot, ...
- Analysing an array of objects data using my function and Math.max
Assuming an array of objects with a couple hundred entries that look like this:
[
  {
    "designation": "419880 (2011 AH37)",
    "discovery_date": "2011-01-07T00:00:00.000",
    "h_mag": 19.7,
    "moid_au": 0.035,
    "q_au_1": 0.84,
    "q_au_2": 4.26,
    "period_yr": 4.06,
    "i_deg": 9.65,
    "pha": true,
    "orbit_class": "Apollo"
  }
]
I'm trying to show the maximum "h_mag" value for all of the data points that I have isolated with the following function:
function filterByPHA(neowise) {
  for (let i = 0; i < neowise.length; i++) {
    let neo = neowise[i];
    if (neo.pha === true) {
      console.log(`${neo.designation}: ${neo.orbit_class}`);
    }
  }
}
filterByPHA(neowise);
The function works.
I have tried the following:
const maxMOID = Math.max(...filterByPHA(neowise).map(function(x) {
  return x.moid_au;
}));
console.log(maxMOID);
What I think this code should be doing is 'applying' Math.max to the result of filterByPHA(neowise) after 'mapping' it to a new function that returns the maximum moid value for the array inside filterByPHA(neowise). However, the .map is giving me 'TypeError: Cannot read properties of undefined (reading 'map')'. The 'x' is just a placeholder; I'm not actually clear on what I need to put there to make this code work, or whether this can even be a functional bit of code.
- Using MapThread with pure function and variable number of elements – mathematica.stackexchange.com: I have a variable size array of lists and would like to set up a MapThread of the lists with a variable pure function $\{\#1,\#2,...,\#n\}\&$ with $n$ being the number of rows in the array and am ...
- How to set dotnet function version similar to dotnet web app assembly version
I have a dotnet function app and am trying to set the assembly version the same way as in a dotnet web app. But when I try to read the assembly version, I only receive the dotnet function runtime version. I need a way to set the version of the function app at build time and read it at runtime.
dotnet publish -c Release -p:UseAppHost=false -o buildFolder project.csproj
With the above publish command I want to pass a version number, and I should be able to read that version at runtime.
I am able to do that in a dotnet web app with the below command
dotnet publish -c Release -o buildFolder project.csproj -p:Version=$(version)
and I am reading like this
string version = Assembly.GetEntryAssembly().GetName().Version.ToString();
I want a similar thing in a dotnet function app.
- Render curve along any path in CSS?
I'm writing a little desktop applet to create and test cubic-bézier functions. Here is a screenshot:
I rendered the actual curve by literally placing a thousand (1000!) dots of 2x2 px on that view according to the cubic-bézier function. As you can see, this "illusion of a curved line" is incredibly jagged. Surely there must be a better way to do this. And if not, how can I get the points to look less jagged? I did think about putting a blur or shadow on each point, but then I remembered I'm rendering a new set of one thousand points every time one of the control points changes, and on second thought I didn't want my PC to blow up. Another idea I had was to rotate the dots orthogonal to the curve's slope at the respective point, but I'm not sure if that'd work, or how I'd even go about achieving it.
Thanks in advance for any suggestions!
- Why you shouldn't use online compilers (original: C++ function returning different values for the same input) [closed]
Original question: I was solving a problem where multiple "cases" are given in a single "input" to be solved independently. However, I noticed a strange phenomenon where my code returns a different answer for the same "case" depending on the previous "case".
My code looks like this:
#include <iostream>
#include <stdlib.h>
#include <algorithm>
#include <vector>
using namespace std;

bool cmp(pair<int, int> a, pair<int, int> b) {
    return a.first < b.first;
}

void solve() {
    int N, Q, C;
    cin >> N >> Q >> C;
    vector<int> val(N+1);
    for (int i = 0; i < N; i++) {
        cin >> val[i];
    }
    vector<pair<int, int> > pairs, fin;
    for (int i = 0; i < Q; i++) {
        int a, b;
        cin >> a >> b;
        a--; b--;
        pairs.push_back(make_pair(a, b));
    }
    sort(pairs.begin(), pairs.end(), cmp);
    fin.push_back(pairs[0]);
    for (int i = 1; i < Q; i++) {
        if (pairs[i].first < pairs[i-1].second) {
            if (pairs[i].second != pairs[i-1].second) {
                cout << -1 << '\n';
                return;
            }
        } else {
            fin.push_back(pairs[i]);
        }
    }
    int maxim = 0, curlastz = 0, lastz = 0, cur = 0, le = fin.size();
    for (int i = 0; i < N; i++) {
        if (val[i] == 0) {
            val[i] = 1;
            lastz = i;
        }
        cur = min(cur, le);
        if (i <= fin[cur].first) {
            maxim = max(maxim, val[i]);
            if (i == lastz) {
                val[i] = 1;
                curlastz = i;
            }
        } else if (i < fin[cur].second) {
            if (val[i] > maxim) {
                maxim = val[i];
                val[curlastz] = maxim;
            } else if (val[i] == 0) {
                val[i] = 1;
            }
        } else if (i == fin[cur].second) {
            if (i != lastz) {
                if (val[i] <= maxim) {
                    cout << -1 << '\n';
                    cout << i << maxim;
                    return;
                }
                maxim = val[i];
            } else {
                maxim++;
                val[i] = maxim;
            }
            curlastz = lastz;
            cur++;
        } else if (i > fin[cur].second) {
            if (val[i] == 0) val[i] = 1;
        }
    }
    for (int i = 0; i < N; i++) {
        if (val[i] > C) {
            cout << -1 << '\n';
            return;
        }
    }
    maxim = val[0]; cur = 0;
    for (int i = 0; i < N; i++) {
        cur = min(cur, le);
        if (i <= fin[cur].first) {
            maxim = max(maxim, val[i]);
        } else if (i < fin[cur].second) {
            if (val[i] > maxim) {
                cout << -1 << '\n';
                return;
            }
        } else if (i == fin[cur].second) {
            if (val[i] <= maxim) {
                cout << -1 << '\n';
                return;
            }
            maxim = val[i];
            cur++;
        } else if (i > fin[cur].second) {
            break;
        }
    }
    cout << val[0];
    for (int i = 1; i < N; i++) {
        cout << ' ' << val[i];
    }
    cout << '\n';
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);
    int T;
    cin >> T;
    for (int cases = 0; cases < T; cases++) {
        solve();
    }
}
According to my understanding, in this case, the solve() function should be reset every time, with new input values. However, for some reason, my code keeps outputting different values for the same test "case" depending on what "case" comes before it (from what I can tell by testing out different test inputs). Why is this happening and what can I do to fix it?
INPUT CASES
1st "input"
1
10 1 5
0 0 0 0 0 0 0 0 0 0
1 2
The 1 at the top indicates that only one "case" is provided. My output was:
1 2 1 1 1 1 1 1 1 1
which was expected.
However, when the input changes to
2
10 2 8
1 0 0 0 0 5 7 0 0 0
4 6
6 9
10 1 5
0 0 0 0 0 0 0 0 0 0
1 2
my output becomes
-1
1 2 1 1 1 1 1 1 3 1
The first -1 is expected, but notice how the second line has a three which was not present when it was the only "case" tested.
I have found out that this problem only occurs in the online compiler that I was using, and works just fine in VS Code. I guess I learned my lesson to always try to use 'legitimate' compilers from now on :)
- why does result.upper() function is not working on the decorator
Why is result.upper() not working? It should return "good morning" in upper case, but it didn't.
def upper_function(original_function):
    def wrapper_function(original_function):
        result = original_function()
        result.upper()
        return
    return wrapper_function

def inner_function(original_function):
    def wrapper_function():
        return
    return

@upper_function
def greet(a):
    print(a)

greet("good morning")
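For contrast, a version of the decorator that actually propagates the upper-cased result might look like this (note that str.upper() returns a new string rather than modifying in place, and greet must return instead of print for the wrapper to have anything to transform):

```python
def upper_function(original_function):
    # the wrapper takes the decorated function's arguments, not the function itself
    def wrapper_function(*args, **kwargs):
        result = original_function(*args, **kwargs)
        return result.upper()  # return the transformed value
    return wrapper_function

@upper_function
def greet(a):
    return a  # return the string so the wrapper can upper-case it

print(greet("good morning"))  # GOOD MORNING
```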
- Python generator yielding from nested non-generator function
This is a dumb example based on a more complex thing that I want to do:
from typing import Generator
def f() -> Generator[list[int], None, None]:
    result = list()
    result.append(1)
    if len(result) == 2:
        yield result
        result = list()
    result.append(2)
    if len(result) == 2:
        yield result
        result = list()
    result.append(3)
    if len(result) == 2:
        yield result
        result = list()
    result.append(4)
    if len(result) == 2:
        yield result
        result = list()

print(list(f()))
The point here is that this bit is copied multiple times:
if len(result) == 2:
    yield result
    result = list()
Normally, I'd change it into something like this:
from typing import Generator

def f() -> Generator[list[int], None, None]:
    def add_one(value: int) -> None:
        nonlocal result
        result.append(value)
        if len(result) == 2:
            nonlocal_yield result
            result = list()

    result = list()
    add_one(1)
    add_one(2)
    add_one(3)
    add_one(4)

print(list(f()))
Obviously, nonlocal_yield is not a thing. Is there an elegant way to achieve this?
I know that I can just create the full list of results, i.e., [[1, 2], [3, 4]], and then either return it or yield individual 2-element sublists. Something like this:
from typing import Generator

def f() -> list[list[int]]:
    def add_one(value: int) -> None:
        nonlocal current
        current.append(value)
        if len(current) == 2:
            result.append(current)
            current = list()

    result = list()
    current = list()
    add_one(1)
    add_one(2)
    add_one(3)
    add_one(4)
    return result

print(list(f()))
However, this defeats the purpose of a generator. I'll go with it in the absence of a better solution, but I'm curious if there is a "pure" generator way to do it.
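In the absence of a nonlocal yield, one common restructuring (a sketch, assuming the add_one calls can be expressed as iteration over the values) is to move the chunking logic into the generator's own loop, so that yield stays in the generator's frame:

```python
from typing import Generator, Iterable

def f(values: Iterable[int]) -> Generator[list[int], None, None]:
    # the helper logic lives in the loop body, so "yield" stays in f itself
    result: list[int] = []
    for value in values:
        result.append(value)
        if len(result) == 2:
            yield result
            result = []

print(list(f([1, 2, 3, 4])))  # [[1, 2], [3, 4]]
```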
- Is there any function or package to generate md5crypt string in Oracle?
We'd like to encrypt user passwords in a file exported from Oracle; the password should be an md5crypt string, like $1$salt$hash.
--PowerShell Git function example:
Get-Md5Crypt('sachiko')
$1$gfJ1cxju47$hLcMO7LZyA2Z74yTP.TmW1
I'm just wondering whether the DBMS_CRYPTO package can generate MD5 with salt. So far I can't find any good examples, though. If there is no appropriate function or package provided by Oracle, we can apply any alternative way.