โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Using IF THEN function to calculate a rolling number in a dataframe

I need help creating a calculated metric: an RSI calculation for a stock dataset. To do so, I want to look at the last 14 days and find the average gain for all days the stock price was up, and then do the same for all days it was down. I am doing these calculations across multiple stocks, so I build a dictionary of DataFrames which I then concatenate. Here is the code:

stocklist=["^SPX", "^DJI"]
d={}
def averageGain14(dailyGain):
    if dailyGain>= 0:
        gain = dailyGain
    return gain
for name in stocklist:
    d[name]= pd.DataFrame()
    data = yf.Ticker(name)
    data = data.history(start=myStart, end=myEnd)
    d[name]= pd.DataFrame(data)
    d[name]["Daily Gain"]=d[name]["Close"].diff()
    d[name]['Average Gain'] = d[name]["Daily Gain"].apply(averageGain14)
    d[name] = d[name].add_prefix(name)
modelData = pd.concat(d.values(), axis=1)

As you can see, I define the function averageGain14 at the top, which does nothing yet beyond returning the gain value if the day was up (step 1 of getting this working). In the for loop, I set the "Average Gain" column to a calculated field that applies the function to the "Daily Gain" column, but I run into an error.

I tried a few approaches, to no avail. First I tried d[name]['Average Gain'] = d[name].rolling(14).mean().where(d[name]['Daily Gain'] >= 0, 0)

That returned an error about the Daily Gain value being a list and not a single value. I then tried appending .values to the Daily Gain call, but that didn't work either, and the approach above is not working. To add complexity, I need this to be a rolling average over the last 14 days: not only summing the positive days, but also finding the average gain for those days (knowing the denominator of how many days were up in the 14-day window). Hopefully this makes sense and someone can point me in the right direction.
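One possible sketch of the rolling calculation, using a clip-then-roll pattern instead of a per-row function (so no UnboundLocalError on down days). The prices here are made up for illustration; in the question they come from yfinance. Which denominator you want depends on the RSI convention: Wilder's RSI divides by the full 14-day window, while dividing by the count of up days needs the down days masked out with NaN.

```python
import pandas as pd

# Hypothetical closing prices; in the question these come from yfinance.
close = pd.Series([100.0, 102.0, 101.0, 105.0, 104.0, 106.0, 103.0,
                   107.0, 108.0, 106.0, 109.0, 111.0, 110.0, 112.0, 115.0])

daily_gain = close.diff()

# clip turns losses into 0, so rolling(14).mean() divides by the full
# 14-day window (Wilder's RSI convention).
gains = daily_gain.clip(lower=0)
losses = -daily_gain.clip(upper=0)
avg_gain = gains.rolling(14).mean()
avg_loss = losses.rolling(14).mean()

# To average only over the up days instead, mask the down days with NaN;
# rolling(...).mean() then skips them, so the denominator is the number
# of up days inside the window.
up_only = daily_gain.where(daily_gain > 0)
avg_up_days = up_only.rolling(14, min_periods=1).mean()
```

Inside the existing loop, the same lines would run per ticker against d[name]["Close"] before the add_prefix call.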

LTspice simulation run via spicelib raises an error when using an external model

I am trying to use spicelib to run an LTspice simulation from Python. This worked well with a small built-in circuit, but now I would like to make it work with a more complex schematic, and it gives me this error:

NotImplementedError                       Traceback (most recent call last)
     32 #Simulator and circuit definition
---> 33 circuit = AscEditor(input_loc) 
     34 LTC = SimRunner(output_folder=output_loc, simulator=LTspice)

spicelib\editor\asc_editor.py:128, in AscEditor.__init__(self, asc_file)
    126     raise FileNotFoundError(f"File {asc_file} not found")
    127 # read the file into memory
--> 128 self.reset_netlist()

spicelib\editor\asc_editor.py:245, in AscEditor.reset_netlist(self, create_blank)
    243         self.sheet = line[len("SHEET "):].strip()
    244     else:
--> 245         raise NotImplementedError("Primitive not supported for ASC file\n" 
    246                                   f'"{line}"')
    247 if component is not None:
    248     assert component.reference is not None, "Component InstName was not given"

NotImplementedError: Primitive not supported for ASC file
"LINE Normal 368 -1088 368 -1088 2

It says a primitive is not supported, and indeed there are two elements that are not in LTspice's library; they are defined with the .model command. I think it can't import them, but I don't know why.

Do you have any suggestions for how I can overcome this?

Thanks
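Judging by the traceback, the failure is on a `LINE` entry, which in an .asc file is a purely graphical annotation, not an electrical element, so one hedged workaround (assuming the installed spicelib version simply doesn't recognise that primitive; upgrading spicelib may also fix it) is to copy the schematic with those drawing primitives stripped before handing it to AscEditor. The function name `strip_unsupported` and the extra primitives in the default tuple are illustrative assumptions, not spicelib API:

```python
from pathlib import Path

def strip_unsupported(asc_path: str, out_path: str,
                      unsupported=("LINE", "RECTANGLE", "CIRCLE", "ARC")) -> str:
    """Copy an LTspice .asc file, dropping purely graphical annotation
    primitives that this AscEditor version rejects. Electrical content
    (WIRE, SYMBOL, FLAG, TEXT directives) is kept untouched."""
    src = Path(asc_path).read_text(encoding="utf-8", errors="replace")
    kept = [line for line in src.splitlines()
            if not line.startswith(unsupported)]
    Path(out_path).write_text("\n".join(kept) + "\n", encoding="utf-8")
    return out_path
```

The two external parts are a separate issue from this traceback: their .model definitions still have to be reachable by LTspice, e.g. via a .lib/.model directive placed in the schematic.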

What can I do if I get an error from the Pixela API?

My code and another person's code are exactly the same except for the username and token, but mine gives an error: "username doesn't exists or token is wrong".

Here is my code:

import requests

USERNAME = "khush69"
TOKEN = "hdlsshhieh4466ohs9w"

pixela_endpoint = "https://pixe.la/v1/users"

user_params = {
    "token": TOKEN,
    "username": USERNAME,
    "agreeTermsOfService": "yes",
    "notMinor": "yes"
}


graph_endpoint = f"{pixela_endpoint}/{USERNAME}/graphs"

graph_config = {
    "id": "graph6969",
    "name": "Coding Learning Graph",
    "unit": "hour",
    "type": "float",
    "color": "shibafu"
}

headers = {
    "X-USER-TOKEN": TOKEN
}

response = requests.post(url=graph_endpoint, json=graph_config, headers=headers)
print(response.text)

Is my code correct? Why can't I add more functions to it?
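One likely cause of that exact error message: the code builds `user_params` but never POSTs it to `pixela_endpoint`, so the username was never registered and the graph request fails authentication. A hedged sketch of the required order of calls, with the HTTP call injected as a parameter (`post` stands in for `requests.post`; the helper name `setup_pixela` is illustrative):

```python
def setup_pixela(post, username: str, token: str) -> dict:
    """Create the Pixela user first, then the graph; Pixela rejects
    graph requests for a username that was never registered."""
    pixela_endpoint = "https://pixe.la/v1/users"
    user_params = {
        "token": token,
        "username": username,
        "agreeTermsOfService": "yes",
        "notMinor": "yes",
    }
    # Step 1: register the user (only needs to succeed once).
    user_resp = post(url=pixela_endpoint, json=user_params)

    # Step 2: create the graph, authenticated with the same token.
    graph_endpoint = f"{pixela_endpoint}/{username}/graphs"
    graph_config = {
        "id": "graph6969",
        "name": "Coding Learning Graph",
        "unit": "hour",
        "type": "float",
        "color": "shibafu",
    }
    headers = {"X-USER-TOKEN": token}
    graph_resp = post(url=graph_endpoint, json=graph_config, headers=headers)
    return {"user": user_resp, "graph": graph_resp}
```

In a real run this would be `setup_pixela(requests.post, USERNAME, TOKEN)`; on later runs the user-creation call returns a "user already exists" style response, which can be ignored. Printing `response.text` for both calls, as the question already does for one, is the quickest way to see which step Pixela is rejecting.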

import os
import sys
import traceback
import warnings
import datetime

import ccxt
import numpy as np
import pandas as pd
import polars as pl
import talib
warnings.filterwarnings('ignore')

pd.set_option('display.float_format', '{:.5f}'.format)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
Exchange = ccxt.binance()
Amount = 3
Leverage = 20
Fees = 0.16
ShortSaveThreshold = 2000
ALLTimeThreshHold = 0
back_date = '2020-12-01'
InfinityTimestamp = pd.to_datetime('2025-01-01')

ListOFDates = []
while True:
    TotalProfit = []
    AvailableCoins = []
    OverALLYearProfit = []
    TotalCoinsList = []
    TotalDollarsList = []
    TotalResultList = []
    TotalBuyTimestamp = []
    TotalHitTimestamp = []
    TotalBuyAmountList = []
    TotalTradeIdList = []
    TradeTimestampList = []
    TotalTradeNameList = []
    AllCoinsDataFrameList = []
    LiquidationTimestamps = []
    TotalLiquidateTimestamp = []
    if ListOFDates:
        start_date = str(ListOFDates[-1])
    else:
        start_date = '2022-01-12 06:45:00'
    end_date = str(pd.to_datetime(start_date)+datetime.timedelta(days=5))
    def Saver(Coin,TimeFrame):
        try:
            folder_path = f'E:\\BinanceData\\monthlyfutures\\{Coin}\\{TimeFrame}\\'
            merged_data = pd.DataFrame()
            for file in os.listdir(folder_path):
                if file.endswith('.csv'):
                    try:
                        if Coin in file:
                            CurrentDate = pd.to_datetime(f'{file.split("-")[2]}-{file.split("-")[3].replace(".csv", "")}-01')
                            if CurrentDate >= pd.to_datetime(back_date) and CurrentDate <= pd.to_datetime(end_date):
                                file_path = os.path.join(folder_path, file)
                                df = pd.read_csv(file_path, header=None)
                                if df.iloc[0].apply(lambda x: isinstance(x, (int, float))).all():
                                    header_row = None
                                else:
                                    header_row = 0
                                df = pd.read_csv(file_path, header=header_row)
                                df.columns = ['open_time', 'open', 'high', 'low', 'close', 'volume', 'close_time',
                                              'quote_volume',
                                              'count', 'taker_buy_volume', 'taker_buy_quote_volume', 'ignore']
                                df['Timestamp'] = pd.to_datetime(df['open_time'], unit='ms')
                                df = df.drop(columns=['open_time', 'close_time', 'quote_volume', 'count', 'taker_buy_volume',
                                                      'taker_buy_quote_volume', 'ignore'])
                                merged_data = pd.concat([merged_data, df], ignore_index=True)
                    except ValueError:
                        pass
            def stochastic(data, k_window, d_window, window):
                min_val = data.rolling(window=window, center=False).min()
                max_val = data.rolling(window=window, center=False).max()
                stoch = ((data - min_val) / (max_val - min_val)) * 100
                K = stoch.rolling(window=k_window, center=False).mean()
                D = K.rolling(window=d_window, center=False).mean()
                return K, D
            def Arranger(List):
                LastAvailableTimestamp = pd.to_datetime('1989-01-01')
                List = List[::-1]
                for i in List:
                    if i != 0:
                        LastAvailableTimestamp = i
                        break
                return pd.to_datetime(LastAvailableTimestamp)
            def TPSLCalculatorLong(Data):
                BuyTimestampList = []
                HitTimestampList = []
                ResultList = []
                DollarsList = []
                TradeNameList = []
                for Timestamp,CurrentPrice in zip(Data['Timestamp'],Data['open']):
                    if Arranger(HitTimestampList) < pd.to_datetime(Timestamp):
                        FutureData = merged_data[merged_data['Timestamp'] >= Timestamp]
                        DipData = FutureData[(FutureData['DipL'] == True) | (FutureData['Timestamp'] == Timestamp)]
                        DipData['BuyPrice'] = DipData['open']
                        TimestampsTrue = []
                        BuyPricePrevious = 0
                        for BuyPrice, TimestampToTake in zip(DipData['BuyPrice'], DipData['Timestamp']):
                            if BuyPricePrevious == 0:
                                BuyPricePrevious = BuyPrice
                                TimestampsTrue.append(TimestampToTake)
                            else:
                                if BuyPricePrevious > BuyPrice:
                                    BuyPricePrevious = BuyPrice
                                    TimestampsTrue.append(TimestampToTake)
                        DipData = DipData[DipData['Timestamp'].isin(TimestampsTrue)]
                        previous_elements = []
                        for index, row in DipData.iterrows():
                            previous_elements_list = DipData.loc[:index, 'BuyPrice'].tolist()
                            previous_elements.append(previous_elements_list)
                        DipData['All_BuyPrices'] = previous_elements
                        DipTimestamps = list(DipData['Timestamp'])
                        NextDipTimestamps = list(DipData['Timestamp'].shift(-1))
                return ResultList,DollarsList,BuyTimestampList,HitTimestampList,TradeNameList
            YearList = []
            for Timestamp in merged_data['Timestamp']:
                YearList.append(Timestamp.year)
            merged_data['EMA3'] = talib.EMA(merged_data['close'],10)
            merged_data['EMA6'] = talib.EMA(merged_data['close'],20)
            merged_data['Year'] = YearList
            merged_data['CrossOverLong'] = (merged_data['Year'] == 2022)&(merged_data['EMA3'].shift(1) > merged_data['EMA6'].shift(1))&(merged_data['EMA3'].shift(2) < merged_data['EMA6'].shift(2))
            merged_data['CrossOverShort'] = (merged_data['Year'] == 2022)&(merged_data['EMA3'].shift(1) < merged_data['EMA6'].shift(1))&(merged_data['EMA3'].shift(2) > merged_data['EMA6'].shift(2))
            merged_data['DipL'] = (merged_data['close'].shift(1) > merged_data['high'].shift(2))
            merged_data['DipH'] = (merged_data['close'].shift(1) < merged_data['low'].shift(2))
            YearList = []
            for Timestamp in merged_data['Timestamp']:
                YearList.append(Timestamp.year)
            merged_data['Year'] = YearList
            merged_data = merged_data[["Timestamp"] + [col for col in merged_data.columns if col != "Timestamp"]]
            merged_data['Coin'] = Coin
            CrossOverDataLong = merged_data[(merged_data['CrossOverLong'] == True)&(merged_data['Timestamp'] >= pd.to_datetime(start_date))&(merged_data['Timestamp'] <= pd.to_datetime(end_date))]
            CrossOverDataLong['PreviousTimestamp'] = CrossOverDataLong['Timestamp'].shift(1)
            CrossOverDataLong['Result'],CrossOverDataLong['Dollars'],CrossOverDataLong['BuyTimestamps'],CrossOverDataLong['HitTimestamp'],CrossOverDataLong['TradeName'] = TPSLCalculatorLong(CrossOverDataLong)

            CrossOverDataShort = merged_data[(merged_data['CrossOverLong'] == True) & (merged_data['Timestamp'] >= pd.to_datetime(start_date)) & (merged_data['Timestamp'] <= pd.to_datetime(end_date))]
            CrossOverDataShort['PreviousTimestamp'] = CrossOverDataShort['Timestamp'].shift(1)
            CrossOverDataShort['Result'], CrossOverDataShort['Dollars'], CrossOverDataShort['BuyTimestamps'], CrossOverDataShort['HitTimestamp'], CrossOverDataShort['TradeName'] = TPSLCalculatorShort(CrossOverDataShort)

            CompleteData = pd.concat([CrossOverDataLong,CrossOverDataShort])
            CrossOverDataLong = CompleteData
            CrossOverDataLong['MaxBuy'] = 1
            CrossOverDataLong = CrossOverDataLong[CrossOverDataLong['Result'] != 'Neutral']
            LongTPS = CrossOverDataLong[CrossOverDataLong['Dollars'] >= 0]
            LongSLS = CrossOverDataLong[CrossOverDataLong['Dollars'] <= 0]
            Pct = []
            for BuyTimestamps in CrossOverDataLong['BuyTimestamps']:
                try:
                    FirstBuyTimestamp = BuyTimestamps[0]
                    LastBuyTimestamp = BuyTimestamps[-1]
                    FirstBuyPrice = merged_data[merged_data['Timestamp'] == FirstBuyTimestamp].iloc[-1]['open']
                    LastBuyPrice = merged_data[merged_data['Timestamp'] == LastBuyTimestamp].iloc[-1]['open']
                    MaxPct = abs(((LastBuyPrice - FirstBuyPrice) * 100) / FirstBuyPrice)
                    Pct.append(MaxPct)
                except Exception as e:
                    Pct.append(0)
            CrossOverDataLong['MaxPct'] = Pct
            TotalProfit.append(sum(CrossOverDataLong['Dollars']))
            print(CrossOverDataLong[['BuyTimestamps','HitTimestamp','Dollars','Result','MaxBuy','TradeName']])
            print(f"Profit || Long:{sum(CrossOverDataLong['Dollars'])} || Coin:{Coin},TotalTP:{len(LongTPS)},TotalSL:{len(LongSLS)},MaxBuys:{max(CrossOverDataLong['MaxBuy'])},MaxPct:{max(CrossOverDataLong['MaxPct'])}")
            AvailableCoins.append(Coin)
            AllCoinsDataFrameList.append(merged_data)
            for Result, Dollars, BuyTimestamps, HitTimestamp,Timestamp,TradeName in zip(CrossOverDataLong['Result'], CrossOverDataLong['Dollars'],CrossOverDataLong['BuyTimestamps'], CrossOverDataLong['HitTimestamp'],CrossOverDataLong['Timestamp'],CrossOverDataLong['TradeName']):
                if TradeName == 'Long':
                    TotalCoinsList.append(Coin)
                    TotalResultList.append(Result)
                    TotalDollarsList.append(Dollars)
                    TotalBuyTimestamp.append(BuyTimestamps)
                    TotalHitTimestamp.append(HitTimestamp)
                    TotalBuyAmountList.append(Amount)
                    TotalTradeIdList.append(f'1 Entry')
                    TradeTimestampList.append(Timestamp)
                    TotalTradeNameList.append(TradeName)
                elif TradeName == 'Short':
                    TotalCoinsList.append(Coin)
                    TotalResultList.append(Result)
                    TotalDollarsList.append(Dollars)
                    TotalBuyTimestamp.append(BuyTimestamps)
                    TotalHitTimestamp.append(HitTimestamp)
                    TotalBuyAmountList.append(Amount)
                    TotalTradeIdList.append(f'1 Entry')
                    TradeTimestampList.append(Timestamp)
                    TotalTradeNameList.append(TradeName)
        except Exception as e:
            traceback.print_exc()
    with open("Future_ALL_Coins.txt", "r") as r:
        # Lines = ['1000SHIBUSDT', '1000XECUSDT', '1INCHUSDT', 'AAVEUSDT', 'ADAUSDT', 'ALGOUSDT', 'ALICEUSDT', 'ALPHAUSDT', 'ANKRUSDT', 'ARPAUSDT', 'ARUSDT', 'ATAUSDT', 'ATOMUSDT', 'AUDIOUSDT', 'AVAXUSDT', 'AXSUSDT', 'BAKEUSDT', 'BALUSDT', 'BANDUSDT', 'BATUSDT', 'BCHUSDT', 'BELUSDT', 'BLZUSDT', 'BNBUSDT', 'BTCUSDT', 'C98USDT', 'CELOUSDT', 'CELRUSDT', 'CHRUSDT', 'CHZUSDT', 'COMPUSDT', 'COTIUSDT', 'CRVUSDT', 'CTKUSDT', 'CTSIUSDT', 'DASHUSDT', 'DEFIUSDT', 'DENTUSDT', 'DGBUSDT', 'DOGEUSDT', 'DOTUSDT', 'EGLDUSDT', 'ENJUSDT', 'EOSUSDT', 'ETCUSDT', 'ETHUSDT', 'FILUSDT', 'FLMUSDT', 'FTMUSDT', 'GALAUSDT', 'GRTUSDT', 'GTCUSDT', 'HBARUSDT', 'HNTUSDT', 'HOTUSDT', 'ICPUSDT', 'ICXUSDT', 'IOSTUSDT', 'IOTAUSDT', 'IOTXUSDT', 'KAVAUSDT', 'KLAYUSDT', 'KNCUSDT', 'KSMUSDT', 'LINAUSDT', 'LINKUSDT', 'LITUSDT', 'LPTUSDT', 'LRCUSDT', 'LTCUSDT', 'MANAUSDT', 'MASKUSDT', 'MATICUSDT', 'MKRUSDT', 'MTLUSDT', 'NEARUSDT', 'NEOUSDT', 'NKNUSDT', 'OCEANUSDT', 'OGNUSDT', 'OMGUSDT', 'ONEUSDT', 'ONTUSDT', 'QTUMUSDT', 'REEFUSDT', 'RENUSDT', 'RLCUSDT', 'RSRUSDT', 'RUNEUSDT', 'RVNUSDT', 'SANDUSDT', 'SFPUSDT', 'SKLUSDT', 'SNXUSDT', 'SOLUSDT', 'STMXUSDT', 'STORJUSDT', 'SUSHIUSDT', 'SXPUSDT', 'THETAUSDT', 'TLMUSDT', 'TOMOUSDT', 'TRBUSDT', 'TRXUSDT', 'UNFIUSDT', 'UNIUSDT', 'VETUSDT', 'WAVESUSDT', 'XEMUSDT', 'XLMUSDT', 'XMRUSDT', 'XRPUSDT', 'XTZUSDT', 'YFIUSDT', 'ZECUSDT', 'ZENUSDT', 'ZILUSDT', 'ZRXUSDT']
        # Lines = ['1000SHIBUSDT', '1000XECUSDT', '1INCHUSDT', 'AAVEUSDT', 'ADAUSDT', 'ALGOUSDT', 'ALICEUSDT', 'ALPHAUSDT', 'ANKRUSDT', 'ARPAUSDT']
        Lines = ['1INCHUSDT']
        ThreadList = []
        for Coin in Lines:
            Coin = Coin.replace('\n', '')
            Saver(Coin,'5m')
    TotalDataFrame = pd.DataFrame({
        'Symbol':TotalCoinsList,
        'Result':TotalResultList,
        'Dollars':TotalDollarsList,
        'BuyTimestamp':TotalBuyTimestamp,
        'HitTimestamp':TotalHitTimestamp,
        'TotalBuyAmount':TotalBuyAmountList,
        'Timestamp':TradeTimestampList,
        'TradeName':TotalTradeNameList,
    })
    TotalDataFrame['HitTimestamp'] = pd.to_datetime(TotalDataFrame['HitTimestamp'])
    TotalDataFrame['BuyTimestamp'] = pd.to_datetime(TotalDataFrame['BuyTimestamp'])

    TotalDataFrame.to_csv("E:\\LongShort(AccountDouble)(TD).csv")
    AllCoinsDataFrame = pd.concat(AllCoinsDataFrameList)
    AllCoinsDataFrame.to_csv("E:\\LongShort(AccountDouble)(ACD).csv")
    AmountLong = 3
    AmountShort = 3
    Leverage = 20
    InfinityTimestamp = '2025-01-01'
    def unrealized_pnl(buy_timestamps,BuyAmountDollars, current_timestamps, coins, all_coins_dataframe, trade_types):
        try:
            all_coins_dataframe = all_coins_dataframe.to_pandas()
            all_coins_dataframe_reset = all_coins_dataframe.reset_index()
            DataCheck = all_coins_dataframe_reset.merge(pd.DataFrame({'Timestamp': buy_timestamps, 'Coin': coins, 'TradeName': trade_types}),on=['Timestamp', 'Coin'])
            indexes_dropped = DataCheck.index
            DataCheck = DataCheck.drop_duplicates()
            indexes_dropped = indexes_dropped.difference(DataCheck.index)
            DataCheckCurrent = all_coins_dataframe_reset.merge(pd.DataFrame({'Timestamp': current_timestamps, 'Coin': coins, 'TradeName': trade_types}),on=['Timestamp', 'Coin'])
            DataCheckCurrent = DataCheckCurrent.drop(indexes_dropped)
            ListNoCandleCoins = list(set(list(DataCheck['Coin'])) - set(list(DataCheckCurrent['Coin'])))
            DataCheck = DataCheck[~DataCheck['Coin'].isin(ListNoCandleCoins)]
            DataCheck.reset_index(inplace=True)
            buy_prices = list(DataCheck['open'])
            buy_prices = np.array(buy_prices)

            Trade_types = list(DataCheck['TradeName'])
            Trade_types = np.array(Trade_types)
            Lowcurrent_prices = list(DataCheckCurrent['low'])
            Lowcurrent_prices = np.array(Lowcurrent_prices)
            Highcurrent_prices = list(DataCheckCurrent['high'])
            Highcurrent_prices = np.array(Highcurrent_prices)

            pctLow = np.abs((Lowcurrent_prices - buy_prices) * 100 / buy_prices)
            pctHigh = np.abs((Highcurrent_prices - buy_prices) * 100 / buy_prices)

            trade_amountLong = AmountLong * Leverage
            trade_amountShort = AmountShort * Leverage

            total_dollars = np.array(np.zeros(len(buy_timestamps)))
            Profit_long_condition = np.where(((Trade_types == 'Long')|(Trade_types == 'LongDouble')) & (Lowcurrent_prices > buy_prices))
            loss_long_condition = np.where(((Trade_types == 'Long')|(Trade_types == 'LongDouble')) & (Lowcurrent_prices <= buy_prices))

            Profit_short_condition = np.where(((Trade_types == 'Short')|(Trade_types == 'ShortDouble')) & (Highcurrent_prices < buy_prices))
            loss_short_condition = np.where(((Trade_types == 'Short')|(Trade_types == 'ShortDouble')) & (Highcurrent_prices >= buy_prices))

            total_dollars[Profit_long_condition] = trade_amountLong
            total_dollars[Profit_long_condition] += total_dollars[Profit_long_condition] * pctLow[Profit_long_condition] / 100
            total_dollars[Profit_long_condition] = total_dollars[Profit_long_condition] - trade_amountLong

            total_dollars[Profit_short_condition] = trade_amountShort
            total_dollars[Profit_short_condition] += total_dollars[Profit_short_condition] * pctHigh[Profit_short_condition] / 100
            total_dollars[Profit_short_condition] = total_dollars[Profit_short_condition] - trade_amountShort

            total_dollars[loss_long_condition] = trade_amountLong
            total_dollars[loss_long_condition] -= total_dollars[loss_long_condition] * pctLow[loss_long_condition] / 100
            total_dollars[loss_long_condition] = total_dollars[loss_long_condition] - trade_amountLong

            total_dollars[loss_short_condition] = trade_amountShort
            total_dollars[loss_short_condition] -= total_dollars[loss_short_condition] * pctHigh[loss_short_condition] / 100
            total_dollars[loss_short_condition] = total_dollars[loss_short_condition] - trade_amountShort

            ####################################################### CLOSE #######################################################

            Closecurrent_prices = list(DataCheckCurrent['close'])
            Closecurrent_prices = np.array(Closecurrent_prices)

            pctClose = np.abs((Closecurrent_prices - buy_prices) * 100 / buy_prices)

            total_dollarsClose = np.array(np.zeros(len(buy_timestamps)))
            Profit_long_condition = np.where(((Trade_types == 'Long')|(Trade_types == 'LongDouble')) & (Closecurrent_prices > buy_prices))
            loss_long_condition = np.where(((Trade_types == 'Long')|(Trade_types == 'LongDouble')) & (Closecurrent_prices <= buy_prices))

            Profit_short_condition = np.where(((Trade_types == 'Short')|(Trade_types == 'ShortDouble')) & (Closecurrent_prices < buy_prices))
            loss_short_condition = np.where(((Trade_types == 'Short')|(Trade_types == 'ShortDouble')) & (Closecurrent_prices >= buy_prices))

            total_dollarsClose[Profit_long_condition] = trade_amountLong
            total_dollarsClose[Profit_long_condition] += total_dollarsClose[Profit_long_condition] * pctClose[Profit_long_condition] / 100
            total_dollarsClose[Profit_long_condition] = total_dollarsClose[Profit_long_condition] - trade_amountLong


            total_dollarsClose[Profit_short_condition] =  trade_amountShort
            total_dollarsClose[Profit_short_condition] += total_dollarsClose[Profit_short_condition] * pctClose[Profit_short_condition] / 100
            total_dollarsClose[Profit_short_condition] = total_dollarsClose[Profit_short_condition] - trade_amountShort

            total_dollarsClose[loss_long_condition] = trade_amountLong
            total_dollarsClose[loss_long_condition] -= total_dollarsClose[loss_long_condition] * pctClose[loss_long_condition] / 100
            total_dollarsClose[loss_long_condition] = total_dollarsClose[loss_long_condition] - trade_amountLong

            total_dollarsClose[loss_short_condition] = trade_amountShort
            total_dollarsClose[loss_short_condition] -= total_dollarsClose[loss_short_condition] * pctClose[loss_short_condition] / 100
            total_dollarsClose[loss_short_condition] = total_dollarsClose[loss_short_condition] - trade_amountShort

        except Exception as e:
            traceback.print_exc()
            return np.zeros(len(buy_timestamps)), np.zeros(len(buy_timestamps))
        return total_dollars, total_dollarsClose
    def process_intervals(interval_chunk, TotalDataFrame, AllCoinsDataFrame):
        portfoilio = []
        UnrealizedMax = []
        MaxTotalDollars = []
        total_dollars = 3000
        HittedSymbols = []
        SymbolListFirst = []
        for candle_time in interval_chunk:
            buy_data = TotalDataFrame.filter(TotalDataFrame['BuyTimestamp'] == pd.to_datetime(candle_time))
            buy_data = buy_data.to_pandas()
            buy_data = buy_data[~(buy_data['Symbol'].isin(HittedSymbols))]
            hit_data = TotalDataFrame.filter(TotalDataFrame['HitTimestamp'] == pd.to_datetime(candle_time))
            not_hit_data = TotalDataFrame.filter((TotalDataFrame['BuyTimestamp'] <= pd.to_datetime(candle_time)) & (TotalDataFrame['HitTimestamp'] > pd.to_datetime(candle_time)) & (TotalDataFrame['HitTimestamp'] != pd.to_datetime(candle_time)))
            not_hit_data = not_hit_data.to_pandas()
            not_hit_data = not_hit_data[~not_hit_data['Symbol'].isin(HittedSymbols)]
            hit_data = hit_data.to_pandas()
            if not SymbolListFirst:
                for Symbol in buy_data['Symbol']:
                    SymbolListFirst.append(Symbol)
            else:
                buy_data = buy_data[buy_data['Symbol'].isin(SymbolListFirst)]
                hit_data = hit_data[hit_data['Symbol'].isin(SymbolListFirst)]
                not_hit_data = not_hit_data[not_hit_data['Symbol'].isin(SymbolListFirst)]
            not_hit_data['candle_time'] = candle_time
            not_hit_data['Fees'] = (not_hit_data['TotalBuyAmount']*Leverage)
            not_hit_data['Fees'] -= not_hit_data['Fees'] * 0.16 / 100
            not_hit_data['Fees'] = not_hit_data['Fees'] - (not_hit_data['TotalBuyAmount']*Leverage)
            not_hit_data = not_hit_data.sort_values(by='Symbol')
            print(f"InTradeSymbols:{SymbolListFirst}")
            HitList = []
            ChunkList = []
            if not not_hit_data.empty:
                not_hit_data['UnrealizedPnlHL'], not_hit_data['UnrealizedPnlClose'] = unrealized_pnl(not_hit_data['BuyTimestamp'],not_hit_data['TotalBuyAmount'], not_hit_data['candle_time'], not_hit_data['Symbol'],AllCoinsDataFrame, not_hit_data['TradeName'])
                not_hit_data['UnrealizedPnlHL'] = not_hit_data['Fees']+not_hit_data['UnrealizedPnlHL']
                not_hit_data['UnrealizedPnlClose'] = not_hit_data['Fees']+not_hit_data['UnrealizedPnlClose']
                not_hit_dataChunks = [group for _, group in not_hit_data.groupby('Symbol')]
                for chunk in not_hit_dataChunks:
                    if sum(chunk['UnrealizedPnlClose']) > 0.02:
                        HitList.append(chunk.iloc[-1]['Symbol'])
                        ChunkList.append(chunk)
                        HittedSymbols.append(chunk.iloc[-1]['Symbol'])
                        if chunk.iloc[-1]['Symbol'] in SymbolListFirst:
                            SymbolListFirst.remove(chunk.iloc[-1]['Symbol'])
                not_hit_data = not_hit_data[~not_hit_data['Symbol'].isin(HitList)]
                total_unrealized_pnl = sum(not_hit_data['UnrealizedPnlHL'])
                total_unrealized_pnlClose = sum(not_hit_data['UnrealizedPnlClose'])
            else:
                total_unrealized_pnl = 0
                total_unrealized_pnlClose = 0
            for chunk in ChunkList:
                chunk['Dollars'] = chunk['UnrealizedPnlClose']
                chunk['HitTimestamp'] = candle_time
                hit_data = pd.concat([hit_data, chunk[['', 'Symbol', 'Result', 'Dollars', 'BuyTimestamp', 'HitTimestamp', 'TotalBuyAmount', 'Timestamp','TradeName']]], axis=0)
                for BuyTimestamp in chunk['BuyTimestamp']:
                    TotalDataFrame = TotalDataFrame.with_columns((pl.when((pl.col('Symbol') == chunk.iloc[-1]['Symbol'])&(pl.col('BuyTimestamp') == BuyTimestamp)).then(pl.lit(candle_time)).otherwise(pl.col('HitTimestamp'))).alias('HitTimestamp'))
            total_dollars = total_dollars - sum(buy_data['TotalBuyAmount'])
            TotalDollarsHit = 0
            Profit = 0
            for Dollar, TotalBuyAmount in zip(hit_data['Dollars'], hit_data['TotalBuyAmount']):
                Profit += Dollar
                TotalDollarsHit += TotalBuyAmount
            total_dollars = total_dollars + TotalDollarsHit
            total_dollars = total_dollars + Profit
            WalletOverviewHL = total_dollars + sum(not_hit_data['TotalBuyAmount']) + total_unrealized_pnl
            WalletOverviewClose = total_dollars + sum(not_hit_data['TotalBuyAmount']) + total_unrealized_pnlClose
            if total_unrealized_pnl > 0:
                AvailableBalance = total_dollars
                AvailableBalanceClose = total_dollars
            else:
                AvailableBalance = total_dollars + total_unrealized_pnl
                AvailableBalanceClose = total_dollars + total_unrealized_pnlClose
            print(f" (High-Low) || Timestamp:{candle_time},WalletOverview:{WalletOverviewHL},AvailableBalance:{AvailableBalance},UPNL:{total_unrealized_pnl},InTradeDollars:{sum(not_hit_data['TotalBuyAmount'])}")
            print(f" (Close) || (Timestamp):{candle_time},(WalletOverview):({WalletOverviewClose}),(AvailableBalance):({AvailableBalanceClose}),(UPNL):({total_unrealized_pnlClose}),(InTradeDollars):({sum(not_hit_data['TotalBuyAmount'])})")
            portfoilio.append(AvailableBalance)
            UnrealizedMax.append(total_unrealized_pnl)
            MaxTotalDollars.append(sum(not_hit_data['TotalBuyAmount']))
            if WalletOverviewClose > 3000:
                print(f'CloseTime = {candle_time} || Profit = {WalletOverviewClose-3000} || MinPortfolio = {min(portfoilio)} || UnrealizedPnlMax = {min(UnrealizedMax)} || Duration:{candle_time-pd.to_datetime(start_date)}')
                restorePoint = sys.stdout
                sys.stdout = open("LongShort(AccountDouble).txt", "a")
                print(f'CloseTime = {candle_time} || Profit = {WalletOverviewClose-3000} || MinPortfolio = {min(portfoilio)} || UnrealizedPnlMax = {min(UnrealizedMax)} Duration:{candle_time-pd.to_datetime(start_date)}')
                sys.stdout.close()
                sys.stdout = restorePoint
                ListOFDates.append(candle_time+datetime.timedelta(minutes=5))
                break
    if __name__ == "__main__":
        TotalDataFrame = pl.read_csv("E:\\LongShort(AccountDouble)(TD).csv", try_parse_dates=True)
        AllCoinsDataFrame = pl.read_csv("E:\\LongShort(AccountDouble)(ACD).csv", try_parse_dates=True)
        AllCoinsDataFrame = AllCoinsDataFrame.select(
            pl.col("Timestamp"),
            pl.col("Coin"),
            pl.col("open").cast(pl.Float32),
            pl.col("high").cast(pl.Float32),
            pl.col("low").cast(pl.Float32),
            pl.col("close").cast(pl.Float32),
            pl.col("volume").cast(pl.Float32),
        )
        TotalDataFrame = TotalDataFrame.sort('BuyTimestamp')
        TotalDataFrame = TotalDataFrame.drop('index')
        four_hour_intervals = pd.date_range(start=start_date, end=str(pd.to_datetime(start_date)+datetime.timedelta(days=500)), freq='5T')
        process_intervals(four_hour_intervals, TotalDataFrame, AllCoinsDataFrame)

I want to exit a coin when it is in profit of 0.02 dollars.

I have been trying for a day; as a beginner in Python I don't know how to write the code well.

Please guide me on how to do this.

What I am actually calculating is the unrealized PnL of my strategy.

Is my code too complicated? Have I written it correctly, or do I need to improve my Python further?
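The exit condition itself already appears in `process_intervals` (`if sum(chunk['UnrealizedPnlClose']) > 0.02`), so the question mostly comes down to trusting the PnL arithmetic. That arithmetic is repeated four times in `unrealized_pnl`; a minimal sketch of the same formula (as I read it from the code: notional = amount × leverage, PnL = notional × signed percent move, fees omitted here even though the script subtracts 0.16%) makes the take-profit check easy to verify in isolation:

```python
import numpy as np

def position_pnl(buy_price, current_price, side, amount=3.0, leverage=20.0):
    """Signed unrealized PnL in dollars per position, mirroring the
    percent-move arithmetic in the question's unrealized_pnl()."""
    buy_price = np.asarray(buy_price, dtype=float)
    current_price = np.asarray(current_price, dtype=float)
    # Longs profit when price rises, shorts when it falls.
    direction = np.where(np.asarray(side) == "Long", 1.0, -1.0)
    pct = (current_price - buy_price) / buy_price  # signed fractional move
    return amount * leverage * pct * direction

pnl = position_pnl([100.0, 100.0], [100.05, 99.9], ["Long", "Short"])
take_profit = pnl > 0.02  # exit once a position clears $0.02 of profit
```

With a $60 notional (3 × 20), a move of just 0.05% clears the $0.02 threshold before fees, which explains why the backtest exits almost immediately on most candles.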

When exactly does calling a type object create another type object?

I'm a noob in Python trying to understand the details. I was reading Python Types and Objects by Shalabh Chaturvedi, and just when the whole small book made sense to me, I hit this paragraph:

There are two kinds of objects in Python:

  1. Type objects - can create instances, can be subclassed.

  2. Non-type objects - cannot create instances, cannot be subclassed.

...

To create a new object using subclassing, we use the class statement and specify the bases (and, optionally, the type) of the new object. This always creates a type object.

To create a new object using instantiation, we use the call operator (()) on the type object we want to use. This may create a type or a non-type object, depending on which type object was used.

The last paragraph, with the italicized text, totally eluded me. I thought calling someTypeObject() always created an instance of that type object.

I Googled and tried ChatGPT but couldn't get a proper answer.
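To make the quoted distinction concrete, here is a minimal sketch (plain builtins only): calling int, a type object, produces a non-type object, while calling type with three arguments produces a brand-new type object, which is exactly the case the italicized sentence is allowing for.

```python
# Calling a type object can produce either kind of object.

# Case 1: calling int() -> a non-type object (an ordinary instance).
n = int("42")
print(isinstance(n, type))    # False: 42 is not a type

# Case 2: calling type() -> a brand-new type object (same as `class Foo:`).
Foo = type("Foo", (object,), {"greet": lambda self: "hello"})
print(isinstance(Foo, type))  # True: Foo is itself a type
print(Foo().greet())          # hello
```

So `someTypeObject()` does always create an instance of that type; the subtlety is that when the type object being called is type itself (or a subclass of it, i.e. a metaclass), the resulting instance is itself a new type.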

Passing pointer to C struct to ioctl in Python

I am writing a Python script that needs to use the FS_IOC_ADD_ENCRYPTION_KEY ioctl.

This ioctl expects an argument of type (pointer to) fscrypt_add_key_arg, which has this definition in the Linux kernel header files:

struct fscrypt_add_key_arg {
    struct fscrypt_key_specifier key_spec;
    __u32 raw_size;
    __u32 key_id;
    __u32 __reserved[8];
    __u8 raw[];
};

#define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR        1
#define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER        2
#define FSCRYPT_KEY_DESCRIPTOR_SIZE             8
#define FSCRYPT_KEY_IDENTIFIER_SIZE             16

struct fscrypt_key_specifier {
    __u32 type;     /* one of FSCRYPT_KEY_SPEC_TYPE_* */
    __u32 __reserved;
    union {
            __u8 __reserved[32]; /* reserve some extra space */
            __u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
            __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
    } u;
};

This is my Python code:

import fcntl
import struct

FS_IOC_ADD_ENCRYPTION_KEY = 0xc0506617

policy_data.key_descriptor: str = get_key_descriptor()
policy_key: bytes = get_policy_key(policy_data)

fscrypt_key_specifier = struct.pack(
    'II16s',
    0x2,
    0,
    bytes.fromhex(policy_data.key_descriptor)
)

fscrypt_add_key_arg = struct.pack(
    f'{len(fscrypt_key_specifier)}sII8I{len(policy_key)}s',
    fscrypt_key_specifier,
    len(policy_key),
    0,
    0, 0, 0, 0, 0, 0, 0, 0,
    policy_key
)

fd = os.open('/mnt/external', os.O_RDONLY)
res = fcntl.ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, fscrypt_add_key_arg)

print(res)

When I execute this code I get an OSError:

Traceback (most recent call last):
  File "/home/foo/fscryptdump/./main.py", line 101, in <module>
    res = fcntl.ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, fscrypt_add_key_arg)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 22] Invalid argument

I have double-checked the documentation of that ioctl and I think the values I am passing are correct, but there probably is a problem in the way they are packed.

How can I solve this?
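One quick sanity check worth doing before blaming the ioctl itself (a sketch, not a confirmed fix): compare the packed sizes against the C layout. struct fscrypt_key_specifier is 40 bytes (two __u32 fields plus a 32-byte union), but the 'II16s' format in the question packs only 24, so every later field of fscrypt_add_key_arg lands at the wrong offset.

```python
import struct

# Sizes the kernel expects, computed from the C definitions above:
# fscrypt_key_specifier = __u32 + __u32 + 32-byte union = 40 bytes.
spec_packed = struct.calcsize('II16s')    # format used in the question
spec_expected = struct.calcsize('II32s')  # union reserves 32 bytes

print(spec_packed, spec_expected)  # 24 40

# Packing the 16-byte identifier into a 32-byte field keeps the union's
# full size: struct.pack('II32s', 0x2, 0, identifier) zero-pads the rest.
```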

How do I rotate a widget in kivy with the mouse

I'm trying to create a rectangle in kivy that follows my mouse so that the top (short side) of the rectangle is always perpendicular to the line between the rectangle and my mouse.

I have not been able to try anything, since I don't know very much about Kivy.

Full Code:

import kivy
from kivy.app import App
from kivy.uix.label import Label
from kivy.graphics import Color, Rectangle
from kivy.uix.gridlayout import GridLayout
from kivy.uix.button import Button
from kivy.uix.widget import Widget
from kivy.properties import ObjectProperty
from kivy.lang import Builder
from kivy.core.window import Window
from kivy.core.audio import SoundLoader
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.image import Image
from kivy.config import Config 
from kivy.graphics import *
from kivy.utils import *

class Myapp(Widget):
    Config.set('graphics', 'resizable', True) 
    def __init__(self, **kwargs):
        super(Myapp, self).__init__(**kwargs)
        
    sound = SoundLoader.load('theme.mp3')
    sound.play()

class MenuScreen(Screen):
    pass

class Game(Screen, Widget):
    def __init__(self, **kwargs):
        super(Game, self).__init__(**kwargs)

        Config.set('graphics', 'resizable', True) 

        with self.canvas:
            Color(0, 0, 0)
            Rectangle(pos=(10, 10), size=(150, 50))

class Myapp(ScreenManager):
    pass

class MyApp(App):
    def build(self):
        return Myapp()

if __name__ == '__main__':
    MyApp().run()

Full description of what I am trying to do: After an initial starting screen, I click the start button to go to the main place. In this place, for now, I need to have a rectangle that follows the movement of my mouse.
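Independent of any Kivy specifics, the geometry the description asks for is small: the angle of the centre-to-mouse vector, which a kivy.graphics.Rotate instruction could then take in degrees. A sketch of just the math (the positions are made up):

```python
import math

def angle_to_mouse(widget_center, mouse_pos):
    """Degrees to rotate so the rectangle's top (short) edge faces the mouse."""
    dx = mouse_pos[0] - widget_center[0]
    dy = mouse_pos[1] - widget_center[1]
    # atan2 gives the angle of the centre->mouse vector; subtracting 90
    # makes the top edge, rather than the right edge, point at the mouse.
    return math.degrees(math.atan2(dy, dx)) - 90

print(round(angle_to_mouse((100, 100), (100, 200)), 6))  # 0.0 (mouse above)
print(round(angle_to_mouse((100, 100), (200, 100)), 6))  # -90.0 (mouse right)
```

In Kivy this angle would typically feed a Rotate instruction (with its origin set to the rectangle's centre) from a handler bound to Window mouse position or on_touch_move.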

Determining Big O notation based on a Python function

I have a school assignment where I need to determine the Big O notations of two functions. The problem is we have had no real courses on Big O, let alone Python. Could someone explain how to determine the big-O, given these functions? Thanks!


def my_func1(inputs):
    n = len(inputs)
    result = 0
    for i in range(n):
        j = 1
        while j < n:
            result += inputs[i] * inputs[j]
            j *= 2
    return result

def my_func2(inputs):
    n = len(inputs)
    for i in range(n - 1):
        for j in range(n - i - 1):
            if inputs[j] > inputs[j + 1]:
                tmp = inputs[j]
                inputs[j] = inputs[j + 1]
                inputs[j + 1] = tmp


Problems in Python

I'm applying a Proportional-Derivative controller to control my robot, but I don't understand why the robot's position doesn't reach the desired position. Can someone help me understand what the error might be?

I want to observe the desired position (q_des) relative to the real position (q). The error is the norm between the desired position q_des at the final time and the position reached by my robot. Essentially, I want to compute my robot's joint torques with a PD control law:

tau = Kp * (q_des - q) - Kd * dq

where q_des is the desired joint position, q is the actual position, dq is the actual joint velocity, and Kp and Kd are gains that need to be set. The accuracy of the simulation is evaluated by varying the time step dt of the simulator:

q_next = q + dt * dq
dq_next = dq + dt * ddq

from adam.casadi.computations import KinDynComputations
from adam.geometry import utils
import numpy as np
import casadi as cs
#import icub_models
from math import sqrt

urdf_path =  "/Users/tommasoandina/Desktop/doosan-robot2-master/dsr_description2/urdf/h2515.blue.urdf" 
# The joint list
joints_name_list = ['joint1', 'joint2', 'joint3', 'joint4', 'joint5', 'joint6']
# Specify the root link
root_link = 'base'

kinDyn = KinDynComputations(urdf_path, joints_name_list, root_link)
num_dof = kinDyn.NDoF

H = cs.SX.sym('H', 4, 4)
# The joint values
s = cs.SX.sym('s', num_dof)
# The base velocity
v_b = cs.SX.sym('v_b', 6)
# The joints velocity
s_dot = cs.SX.sym('s_dot', num_dof)
# The base acceleration
v_b_dot = cs.SX.sym('v_b_dot', 6)
# The joints acceleration
s_ddot = cs.SX.sym('s_ddot', num_dof)

# initialize
mass_matrix_fun = kinDyn.mass_matrix_fun()
coriolis_term_fun = kinDyn.coriolis_term_fun()
gravity_term_fun = kinDyn.gravity_term_fun()
bias_force_fun = kinDyn.bias_force_fun()
Jacobian_fun = kinDyn.jacobian_fun("link6")

class Controller:
    def __init__(self, kp, kd, dt, q_des):
        self.q_previous = 0.0
        self.kp = kp
        self.kd = kd
        self.dt = dt
        self.q_des = q_des
        self.first_iter = True

    def control(self, q, dq):
        if self.first_iter:
            self.q_previous = q
            self.first_iter = False

        self.q_previous = q
        return self.kp * (self.q_des - q) - self.kd * dq


   
class Simulator:
    def __init__(self, q, dt, dq, ddq):
        self.q = q
        self.dt = dt
        self.dq = dq
        self.ddq = ddq

    def simulate_q(self, tau, h2):
        dq = self.simulate_dq(tau, h2)
        self.q += self.dt * dq
        return self.q
    
    def simulate_dq(self, tau, h2):
        self.ddq = cs.inv(M2) @ (tau - h2)
        self.dq += self.dt * self.ddq
        return self.dq
    
    def simulate_ddq(self, M2, tau, h2):
        self.ddq = cs.inv(M2) @ (tau - h2)
        return self.ddq


# Random values
q_des = (np.random.rand(num_dof) - 0.5) * 5
xyz = (np.random.rand(3) - 0.5) * 5
rpy = (np.random.rand(3) - 0.5) * 5
H_b = utils.H_from_Pos_RPY(xyz, rpy)
v_b = (np.random.rand(6) - 0.5) * 5
s = (np.random.rand(len(joints_name_list)) - 0.5) * 5
s_dot = (np.random.rand(len(joints_name_list)) - 0.5) * 5





M = kinDyn.mass_matrix_fun()
M2 = cs.DM(M(H_b, s))
M2 = M2[:6, :6]

h = kinDyn.bias_force_fun()
h2 = cs.DM(h(H_b, s, v_b, s_dot))
h2 = h2[:6]


q_0 = np.zeros(num_dof)
#q_0 = cs.SX.sym('q_0', num_dof)
kp = 0.1 
kd = sqrt(kp)
dt = 1.0 / 16.0 * 1e-3
total_time = 2.0 * 1e-3

#dq = cs.SX.sym('dq', num_dof)
#ddq = cs.SX.sym('ddq', num_dof)

dq = np.zeros(num_dof)
ddq = np.zeros(num_dof)


N = int(total_time / dt)

ctrl = Controller(kp, kd, dt, q_des)
simu = Simulator(q_0, dt, dq, ddq)

for i in range(N):
    tau = ctrl.control(simu.q, simu.dq)
    simu.simulate_q(tau, h2)
    simu.simulate_ddq(M2, tau, h2)

q_des_np = cs.DM(q_des).full().flatten()
simu_q_np = cs.DM(simu.q).full().flatten()

# Compute the infinity-norm error between the NumPy vectors
errore_medio_infinito = np.max(np.abs(q_des_np - simu_q_np))

print(q_des_np)
print(simu_q_np)

print("Infinity-norm error:", errore_medio_infinito)
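To separate the control law from the robot dynamics, here is a stripped-down sanity check (a sketch with illustrative gains, not the gains for this robot): the same PD law and the same explicit-Euler updates applied to a unit-mass double integrator do converge to q_des, which suggests the issue in the full script lies in the dynamics terms (M2, h2) or in how the simulator methods recompute ddq, rather than in the law itself.

```python
import numpy as np

# PD control of a unit-mass double integrator: ddq = tau (M = I, h = 0).
kp = 100.0
kd = 2.0 * np.sqrt(kp)         # critically damped for a unit mass
dt, steps = 1e-3, 5000
q_des = np.array([1.0, -0.5])  # arbitrary target

q = np.zeros(2)
dq = np.zeros(2)
for _ in range(steps):
    tau = kp * (q_des - q) - kd * dq   # same PD law as in the question
    ddq = tau
    q = q + dt * dq                    # same explicit-Euler updates
    dq = dq + dt * ddq

print(np.max(np.abs(q_des - q)))       # infinity-norm error, essentially zero
```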

Why won't my home page redirect to a detail view (Django)

So the context is I'm following a tutorial on codemy.com. Somewhere before Django 5.0 I lost my "magic"; the tutorial was written for 3.8 or 4.x, maybe. I am showing a function-based view, although I have tried the class-based view as suggested on the Codemy YouTube channel. The reason I chose a function view is that it was easier for me to debug.

views.py

 from django.shortcuts import render
 from django.views.generic import ListView #DetailView
 from django.http import HttpResponse
 from .models import Post


 class Home(ListView):
     model = Post
     template_name = "home.html"


 def articleDetail(request, pk):
     try:
         obj = Post.objects.get(pk=pk)
         return render(request, "article_detail.html", {object, obj})
     except Post.DoesNotExist:
         print("Object number: " + str(pk) + " not found")
         return HttpResponse("Object number: " + str(pk) + " not found")

the model

 from django.db import models
 from django.contrib.auth.models import User


 class Post(models.Model):
     title = models.CharField(max_length=255)
     author = models.ForeignKey(User, on_delete=models.CASCADE)
     body = models.TextField()

     def __str__(self):
         return str(self.title) + ' by: ' + str(self.author)

the urls file

 from django.urls import path,include
 from .views import Home, articleDetail

 urlpatterns = [
     path('', Home.as_view(), name="home"),
     path('article/<int:pk>', articleDetail,name="article-detail"),
         ]

the template for home, works fine until redirect

 <!DOCTYPE html>
 <html lang="en">
 <head>
     <meta charset="UTF-8">
     <title>Landing!!!</title>
     <h1> Home Pageeeeee</h1>
 </head>
 <body>
 <ul>
     <h2>Articles</h2>
     {% for article in object_list  %}
     <li>
         <a href="{$ url 'article-detail' article.pk %}">{{article.title}}</a>
         <br/>
         By: {{article.author}}
         <p>
             {{article.body}}
         </p>
     </li>
     {% endfor %}
 </ul>
 </body>
 </html>

I think my error is either in how I'm passing the primary key to look up the object, or in how I'm asking the URL file to locate the document.
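For reference, a sketch of the usual spellings at the two spots the suspicion points at (assumptions about intent, not a verified fix): render() expects a dict context, whereas {object, obj} as written in the view is a Python set literal, and the Django template tag syntax is {% url %}, not {$ url %}.

```python
# render(request, "article_detail.html", {object, obj}) passes a *set*
# containing the builtin `object` and the post -- no names for the template.
obj = "a Post instance"             # stand-in for Post.objects.get(pk=pk)
context_as_written = {object, obj}  # set literal: {<class 'object'>, obj}
context_dict = {"object": obj}      # dict: what render()/templates expect

print(type(context_as_written).__name__)  # set
print(context_dict["object"])             # a Post instance
```

In the template, the matching link form would be `<a href="{% url 'article-detail' article.pk %}">`.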

Elastic Beanstalk API issue

I have my environment running on Elastic Beanstalk and it's deployed successfully. I have also enabled HTTPS on the environment and the link is working fine, but the issue is: with my Python backend hosted on Elastic Beanstalk, only the root API, i.e. /, is working; all the other APIs, e.g. /get_details, are not working.

Any help would be appreciated, Thanks.

I tried testing locally, checked the DynamoDB settings, etc.; it's all good.

How to make my dictionary thread safe in python?

Python beginner here. I have a class as follows, which maintains a simple dictionary of historical records. It needs to support insertion of records and lookup, i.e. given an ID and a year, return all records older than the given year.

The request is to make it thread safe. Could anyone give me some suggestions on how to make the dictionary thread safe, assuming there will be lots of threads calling record and find_history at the same time?

class Records:
    def __init__(self):
        self.map = {} # ID -> [year, location]

    def record(self, ID, year, location):
        if ID not in self.map:
            self.map[ID] = []
        self.map[ID].append([year, location])

    def find_history(self, ID, year):
        if ID not in self.map:
            return []
        results = []
        for record in self.map[ID]:
            if record[0] <= year:
                results.append(record)
        return results

Thanks a lot!

I tried reading about Python multithreading, but still have no clue. In particular, how do I implement a read-write lock in Python, so that when there are lots of reads they can be done concurrently without blocking each other?
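A minimal thread-safe sketch using a single threading.Lock (the simplest safe option; the standard library has no built-in read-write lock, though one can be built from a Lock and a Condition):

```python
import threading

class Records:
    """Same class as above, with one lock guarding every access to the map."""
    def __init__(self):
        self.map = {}                    # ID -> list of [year, location]
        self._lock = threading.Lock()

    def record(self, ID, year, location):
        with self._lock:
            self.map.setdefault(ID, []).append([year, location])

    def find_history(self, ID, year):
        with self._lock:
            # Build a new list while holding the lock, so callers never
            # iterate a list that another thread is appending to.
            return [rec for rec in self.map.get(ID, []) if rec[0] <= year]

r = Records()
r.record("a", 2001, "NYC")
r.record("a", 2010, "LA")
print(r.find_history("a", 2005))  # [[2001, 'NYC']]
```

If profiling ever shows readers contending, the usual next steps are a third-party readers-writer lock or sharding the dict; but because each critical section here is short, a plain Lock is typically enough.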

My code didn't give me the correct output (Python)

I made a program to automate checking YAML files against some lint rules, but the output is not what I expected.

This is my testing environment:

The files that need to be checked: a single YAML file containing multiple documents separated by "---", other files with a single YAML document, and other YAML files nested in subdirectories.

Input: input() is used three times to get the path of the YAML file or directory, the key string to be searched, and the value to be matched against the key.

The user's input is then stored in the directory, key, and value variables, respectively. Example input:

directory = input("Enter the path: ") yamls

key = input("Enter the key string : ") metadata.name[*]

value = input("Enter the value : ") service1

Output:

So, each line is telling you that the script found a key-value pair in a specific YAML file that matches the key and value you were searching for

The output is a series of print statements from the Python script. Each line represents a match found in the YAML files that were searched.

Example output:

key = metadata.name[0], value = service1, ..\..\yamls\6.yaml

key = metadata.name[1], value = service1, ..\..\yamls\6.yaml

key = metadata.name[0], value = service1, ..\..\yamls\7.yaml

key = metadata.name[1], value = service1, ..\..\yamls\7.yaml

key = metadata.name[2], value = service1, ..\..\yamls\7.yaml

This is the key that was found in the YAML file. It's a path to a specific value in the YAML structure — in this case, the first name specified in the metadata section.

value = service1: This is the value associated with the key in the YAML file.

....\\yamls\\6.yaml: This is the relative path to the YAML file where the match was found.

Here is the yaml file example that have multiple yamls:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: my-image:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: app1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: my-image:2.0
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  ports:
  - name: http
    port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: app2

This is my code,

import os
from ruamel.yaml import YAML
from jsonpath_ng import parse

def check_yaml(file_path, key_pattern, value_pattern, base_dir):
    yaml = YAML(typ='safe')
    try:
        with open(file_path, 'r') as stream:
            all_data = list(yaml.load_all(stream))
            for data in all_data:
                jsonpath_expr = parse(key_pattern)
                matches = [match for match in jsonpath_expr.find(data)]
                for match in matches:
                    if str(match.value) == value_pattern:
                        relative_path = os.path.relpath(file_path, start=base_dir)
                        # Convert the Path object to a string
                        path_str = str(match.full_path)
                        # Extract the index from the string
                        index = path_str.split('[')[-1].split(']')[0]
                        # Replace '*' with the index
                        modified_key_pattern = key_pattern.replace('*', str(index), 1)
                        print(f"key = {modified_key_pattern}, value = {match.value}, {relative_path}")
    except Exception as e:
        print(f"Error checking {file_path}: {e}")

def flatten_dict(d, parent_key='', sep='.'):
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            for i, item in enumerate(v):
                if isinstance(item, dict):
                    items.extend(flatten_dict(item, f"{new_key}[{i}]", sep=sep).items())
                else:
                    items.append((f"{new_key}[{i}]", item))
        else:
            items.append((new_key, v))
    return dict(items)

def main():
    directory = input("Enter the path of the YAML file or directory which has YAML files: ")
    key = input("Enter the key string to be searched in YAML files: ")
    value = input("Enter the value to be matched against the key: ")

    if os.path.isdir(directory):
        for root, dirs, files in os.walk(directory):
            for file in files:
                if file.endswith(".yaml"):
                    check_yaml(os.path.join(root, file), key, value, root)
    elif os.path.isfile(directory):
        check_yaml(directory, key, value)

if __name__ == "__main__":
    main()

This is the example output it gives

key = metadata.name[0], value = service1, ..\..\yamls\5.yaml

key = metadata.name[0], value = service1, ..\..\yamls\6.yaml

key = metadata.name[0], value = service1, ..\..\yamls\6.yaml

key = metadata.name[0], value = service1, ..\..\yamls\7.yaml

key = metadata.name[0], value = service1, ..\..\yamls\7.yaml

key = metadata.name[0], value = service1, ..\..\yamls\7.yaml

this is my expectation

key = metadata.name[0], value = service1, ..\..\yamls\5.yaml

key = metadata.name[0], value = service1, ..\..\yamls\6.yaml

key = metadata.name[1], value = service1, ..\..\yamls\6.yaml

key = metadata.name[0], value = service1, ..\..\yamls\7.yaml

key = metadata.name[1], value = service1, ..\..\yamls\7.yaml

key = metadata.name[2], value = service1, ..\..\yamls\7.yaml

Here is my thought:

The issue might be that the * in the key_pattern is not being replaced because it's not found. The str.replace() function only replaces the first occurrence of the old substring with the new substring. If the * is not found in the key_pattern, the function will not do anything.

What should I change? Thank you in advance for your help
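If the goal is the numbering shown in the expected output, i.e. matches counted per file across all of its documents, one change that would produce it (a sketch, assuming each document contributes its own metadata.name at index 0) is to keep a running counter over the documents instead of reading an index out of match.full_path:

```python
# Hypothetical rework of the inner loop of check_yaml: number matches
# per file with a running counter across documents.
def number_matches(per_document_values, key_pattern, value_pattern):
    results = []
    hit = 0                                  # running index for this file
    for doc_values in per_document_values:   # one entry per YAML document
        for value in doc_values:
            if str(value) == value_pattern:
                results.append((key_pattern.replace('*', str(hit), 1), value))
                hit += 1
    return results

# Stand-in for a file like 7.yaml with three matching documents:
docs = [["service1"], ["service1"], ["service1"]]
for key, value in number_matches(docs, "metadata.name[*]", "service1"):
    print(f"key = {key}, value = {value}")
# key = metadata.name[0] ... [1] ... [2]
```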

How do I add multiple tuples together in a list of tuples [closed]

Everything works except for the last part of the code. I cannot figure out how to add the tuples' prices together from the list computer_parts once the user is done shopping. I have this updated version:

# ----------------------CHALLENGE WITH TUPLES---------------------------
# Make all the available_parts tuples instead of strings
# Each tuple would contain the name of the part, and its price
# you could then display the prices, so that the user knows how much
# they're spending. When they've chosen all their parts, print out the
# total price
available_parts = [("computer", 400),
                   ("monitor", 250),
                   ("keyboard", 75),
                   ("mouse", 25),
                   ("mouse mat", 10),
                   ("hdmi cable", 15),
                   ("dvd drive", 25),
                   ("camera", 120),
                   ]

# valid_choices = [str(i) for i in range(1, len(available_parts) + 1)]
valid_choices = []

for i in range(1, len(available_parts) + 1):
    valid_choices.append(str(i))
print(valid_choices)
current_choice = "-"
computer_parts = []     # Create an empty list

while current_choice != '0':
    if current_choice in valid_choices:
        index = int(current_choice) - 1
        chosen_part = available_parts[index]
        if chosen_part in computer_parts:
            # it's already in, so remove it
            print("Removing {}".format(current_choice))
            computer_parts.remove(chosen_part)
        else:
            print("Adding {}".format(current_choice))
            computer_parts.append(chosen_part)
        print("Your list now contains: {}".format(computer_parts))
    else:
        print("Please add options from the list below: ")
        for number, part in enumerate(available_parts):
            print("{0}: {1}".format(number + 1, part))

    current_choice = input()

print("Your list has: {}".format(computer_parts))

for index, (part, price) in enumerate(computer_parts):
    print("{}: {}".format(part, price))

as well as this version too:

# ----------------------CHALLENGE WITH TUPLES---------------------------
# Make all the available_parts tuples instead of strings
# Each tuple would contain the name of the part, and its price
# you could then display the prices, so that the user knows how much
# they're spending. When they've chosen all their parts, print out the
# total price
available_parts = [("computer", 400),
                   ("monitor", 250),
                   ("keyboard", 75),
                   ("mouse", 25),
                   ("mouse mat", 10),
                   ("hdmi cable", 15),
                   ("dvd drive", 25),
                   ("camera", 120),
                   ]

# valid_choices = [str(i) for i in range(1, len(available_parts) + 1)]
valid_choices = []

for i in range(1, len(available_parts) + 1):
    valid_choices.append(str(i))
print(valid_choices)
current_choice = "-"
computer_parts = []     # Create an empty list

while current_choice != '0':
    if current_choice in valid_choices:
        index = int(current_choice) - 1
        chosen_part = available_parts[index]
        if chosen_part in computer_parts:
            # it's already in, so remove it
            print("Removing {}".format(current_choice))
            computer_parts.remove(chosen_part)
        else:
            print("Adding {}".format(current_choice))
            computer_parts.append(chosen_part)
        print("Your list now contains: {}".format(computer_parts))
    else:
        print("Please add options from the list below: ")
        for number, part in enumerate(available_parts):
            print("{0}: {1}".format(number + 1, part))

    current_choice = input()

print("Your list has: {}".format(computer_parts))

for part, price in computer_parts:
    print("Total: {}".format(price))

I have tried a few things, like adding part to part, and all that does is double the price of one item in the loop instead of adding all the prices in the list of tuples.

This is meant to give the total price of all the products in the list after the program is done running (i.e. when the shopper is done shopping). I want to add up index position [1] of every item in the list, which is the item's price, to get a total price.
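Summing the second element of each tuple is all the final step needs; both forms below work on the computer_parts list the program builds (sample data here for illustration):

```python
computer_parts = [("computer", 400), ("monitor", 250), ("mouse", 25)]

# Generator form: unpack each (part, price) tuple and sum the prices.
total = sum(price for _, price in computer_parts)
print("Total: {}".format(total))  # Total: 675

# Explicit loop, closer to the style used in the question:
total = 0
for part, price in computer_parts:
    total += price                # accumulate, don't overwrite
print("Total: {}".format(total))  # Total: 675
```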

How to get ScheduledEventLocation with discord pycord

I'm having some difficulties understanding how to retrieve the location from a ScheduledEvent.

Specifically, I want to get the name of the voice channel where a ScheduledEvent will be in (if the location is indeed a voice channel).

My problem is that I don't understand the documentation and how the value and type attributes of ScheduledEventLocation work. Docs:
https://docs.pycord.dev/en/stable/_modules/discord/scheduled_events.html#ScheduledEventLocation
https://docs.pycord.dev/en/stable/api/models.html#discord.ScheduledEventLocation

Here's the code:

    for event in guild.scheduled_events:
        if hasattr(event, 'location') and hasattr(event, 'start_time'):
            location = event.location
            start_time = event.start_time
            
            delta_days = int((start_time.timestamp() - datetime.now().timestamp())/60/60/24)

            if 0 <= delta_days < 14:
                channel_name = location.value.name
                ...

Thanks for the help!

Django crispy-forms TemplateDoesNotExist

I am new to Django, so I was trying the structures from the book "Django for Beginners" by William S. Vincent on how to attach crispy forms to my signup page.

However, in the middle of my progress on the topic, I ran into a TemplateDoesNotExist exception (error screenshot attached).

Here is where the error is raised (screenshot attached):

And here is my settings.py configuration:

...

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'crispy_forms',
    "accounts",
    "pages",
]
CRISPY_TEMPLATE_PACK = "bootstrap4"
...

django = 4.2.3
django-crispy-forms = 2.0

I've tried to create a sign-up page, configuring its views and URLs properly to host crispy_forms in my project.

crispy_forms (version 2.0) is also installed in my virtual environment (package list screenshot attached).

OSError: [WinError 740] The requested operation requires elevation

I have a simple piece of code that loads an image called "try.png", and I want to convert it from image to text using pytesseract, but I am having some issues with the code.

import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd=r'tesseract-ocr-setup-4.00.00dev.exe'
img = cv2.imread('try.png')
img= cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
print(pytesseract.image_to_string(img))

But it's giving me an error.

Traceback (most recent call last):
  File "C:/Users/user 1/PycharmProjects/JARVIS/try.py", line 6, in <module>
    print(pytesseract.image_to_string(img))
  File "C:\Users\user 1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pytesseract\pytesseract.py", line 356, in image_to_string
    return {
  File "C:\Users\user 1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pytesseract\pytesseract.py", line 359, in <lambda>
    Output.STRING: lambda: run_and_get_output(*args),
  File "C:\Users\user 1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pytesseract\pytesseract.py", line 270, in run_and_get_output
    run_tesseract(**kwargs)
  File "C:\Users\user 1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pytesseract\pytesseract.py", line 241, in run_tesseract
    raise e
  File "C:\Users\user 1\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pytesseract\pytesseract.py", line 238, in run_tesseract
    proc = subprocess.Popen(cmd_args, **subprocess_args())
  File "C:\Users\user 1\AppData\Local\Programs\Python\Python38-32\lib\subprocess.py", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\user 1\AppData\Local\Programs\Python\Python38-32\lib\subprocess.py", line 1307, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
OSError: [WinError 740] The requested operation requires elevation

Process finished with exit code 1

Any idea on how to overcome this error?
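A likely-relevant detail (an observation, not a guaranteed fix): the string assigned to tesseract_cmd is the Tesseract *installer*, and launching an installer is exactly the kind of operation Windows refuses without elevation. tesseract_cmd normally points at the installed tesseract.exe; the path below is a common default and an assumption.

```python
# The command configured in the question is the *setup* executable:
cmd_in_question = r'tesseract-ocr-setup-4.00.00dev.exe'
print('setup' in cmd_in_question)  # True -> this would launch the installer

# A typical location of the installed binary (assumption; adjust per machine):
likely_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
```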

How to close a python program anytime a key is pressed?

I was trying to make a bot that farms Microsoft points, and I wanted the macro to stop any time I pressed the "Esc" key. I was using the keyboard module to check whether the Esc key was being pressed and to close the program if it was, but I was also using the time.sleep() function to wait some time for the webpage to load so that I can get points. I found that if I press "Esc" while the program is in the time.sleep() call, the program doesn't end. Please help.

import webbrowser,time,random,pyautogui,requests,keyboard
def close():
    with pyautogui.hold("Ctrl"):
            pyautogui.press("w")

def generate():
    with open("wow","r",encoding="utf-8") as f:
        p = eval(f.read())
        search = random.choice(p)+"+"+random.choice(p)+"+"+random.choice(p)
    with open("wow1","r",encoding="utf-8") as f:
        p = eval(f.read())
        c = random.choice(p)
        url = c.replace(" ",search)
    return url
def links():
    while ():
        url = generate()
        webbrowser.open(url)
        webbrowser.open("https://rewards.bing.com/pointsbreakdown")
        requests.get("https://rewards.bing.com/pointsbreakdown")
        ''' try:
                no = requests.get(url=points)
        except Exception as err:
            if type(err) == requests.exceptions.ReadTimeout:
                print("Internet connection issues, Retrying ...")
            else:
                print(err)
            continue
        print(no)'''
        time.sleep(3)
        for i in range(2):
            close()
webbrowser.open("example.com")
time.sleep(0.2)     
links()
close()

I was hoping someone could tell me whether there are alternative modules or some alternative method of doing this.
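One standard-library pattern that sidesteps the problem: instead of time.sleep(3), wait on a threading.Event with a timeout. Any watcher — such as a keyboard hook calling stop_requested.set() — wakes the wait immediately. The sketch below simulates the watcher with a Timer so it runs anywhere:

```python
import threading

stop_requested = threading.Event()

def interruptible_sleep(seconds):
    """Sleep up to `seconds`, returning True early if a stop was requested."""
    # Event.wait returns True as soon as the event is set,
    # or False once the timeout elapses with no stop request.
    return stop_requested.wait(timeout=seconds)

# Simulate the Esc handler firing 0.1 s from now; in the real bot this
# would be keyboard.add_hotkey('esc', stop_requested.set).
threading.Timer(0.1, stop_requested.set).start()

woke_early = interruptible_sleep(5)
print(woke_early)  # True: the 5-second wait was cut short
```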

Why does filtering based on a condition result in an empty DataFrame in pandas?

I'm working with a DataFrame in Python using pandas, and I'm trying to apply multiple conditions to filter rows based on temperature values from multiple columns. However, after applying my conditions and using dropna(), I end up with zero rows even though I expect some data to meet these conditions.

The goal is to compare each value with ambient temp + 40 °C; if the value is more than this, replace it with NaN. Otherwise, keep the original value.

Here's a sample of my DataFrame and the conditions I'm applying:

data = {
    'Datetime': ['2022-08-04 15:06:00', '2022-08-04 15:07:00', '2022-08-04 15:08:00', 
                 '2022-08-04 15:09:00', '2022-08-04 15:10:00'],
    'Temp1': [53.4, 54.3, 53.7, 54.3, 55.4],
    'Temp2': [57.8, 57.0, 87.0, 57.2, 57.5],
    'Temp3': [59.0, 58.8, 58.7, 59.1, 59.7],
    'Temp4': [46.7, 47.1, 80, 46.9, 47.3],
    'Temp5': [52.8, 53.1, 53.0, 53.1, 53.4],
    'Temp6': [50.1, 69, 50.3, 50.3, 50.6],
    'AmbientTemp': [29.0, 28.8, 28.6, 28.7, 28.9]
}
df1 = pd.DataFrame(data)
df1['Datetime'] = pd.to_datetime(df1['Datetime'])
df1.set_index('Datetime', inplace=True)

Code:

temp_cols = ['Temp1', 'Temp2', 'Temp3', 'Temp4', 'Temp5', 'Temp6']
ambient_col = 'AmbientTemp'

condition = (df1[temp_cols].lt(df1[ambient_col] + 40, axis=0))

filtered_df = df1[condition].dropna()
print(filtered_df.shape)

Response:

(0, 99)

Problem:

Despite expecting valid data that meets the conditions, the resulting DataFrame is empty after applying the filter and dropping NaN values. What could be causing this issue, and how can I correct it?
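The stated goal — replace out-of-range readings with NaN and keep everything else — is a masking operation rather than a row filter. df1[condition] NaN-fills every cell the temperature-columns-only condition doesn't cover, including AmbientTemp, so dropna() then removes every row. A sketch on a cut-down version of the sample data:

```python
import pandas as pd

df1 = pd.DataFrame({
    'Temp1': [53.4, 54.3, 53.7],
    'Temp2': [57.8, 57.0, 87.0],
    'AmbientTemp': [29.0, 28.8, 28.6],
})
temp_cols = ['Temp1', 'Temp2']

condition = df1[temp_cols].lt(df1['AmbientTemp'] + 40, axis=0)

# where() keeps values satisfying the condition and writes NaN elsewhere,
# only in the temperature columns -- no rows are dropped.
df1[temp_cols] = df1[temp_cols].where(condition)
print(df1)
# Only Temp2 in the last row (87.0 > 28.6 + 40) becomes NaN.
```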

โŒ
โŒ