โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

How do I update the Google Colab Notebook kernel? I am running an old version (0.0.1a2) and I can't update to the current version

I am running an ancient version of the Colab notebook kernel on colab.research.google.com. I've tried everything to update the kernel, but nothing has worked.

I am running version 0.0.1a2 (I think this is an alpha version) while the most recent version is 2.0.6!!!

I would appreciate getting help with this problem as many modules don't work properly with the antiquated notebook.

My futile attempts to update the Colab notebook kernel have consisted of:

  • Trying different browsers.
  • Clearing cache and cookies.
  • Restarting the Colab notebook runtime with GPU and TPU runtimes.
  • Pip-installing colab from inside the notebook.
  • Manually installing colab from the colab-0.3.2.tar.gz file. The tarball contained only a blank file.
  • Manually installing colab from the GitHub repo. That didn't work either.


I'm using sklearn 1.4.1 but random forest still cannot handle missing values

I've read that the random forest algorithm in scikit-learn >= 1.4 should be able to handle NaN. I've checked that I have the latest version of scikit-learn.

! pip install --upgrade scikit-learn

import sklearn
print(sklearn.__version__)

1.4.1

However, I still get the error:

ValueError: Input X contains NaN.
RandomForestClassifier does not accept missing values encoded as NaN natively. For supervised learning, you might want to consider sklearn.ensemble.HistGradientBoostingClassifier and Regressor which accept missing values encoded as NaNs natively. Alternatively, it is possible to preprocess the data, for instance by using an imputer transformer in a pipeline or drop samples with missing values. See https://scikit-learn.org/stable/modules/impute.html You can find a list of all estimators that handle NaN values at the following page: https://scikit-learn.org/stable/modules/impute.html#estimators-that-handle-nan-values

Why? Should I import something else? I'm confused.
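For comparison, here is a minimal sketch of the plain case that does work in 1.4: NaN in a dense, numeric feature matrix fed directly to RandomForestClassifier. It may help narrow down whether it is the pipeline's output (for example sparse one-hot encoding or mixed dtypes) that trips the check, rather than the forest itself.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# NaN in a plain float array: supported by RandomForestClassifier in >= 1.4.
X = np.array([[1.0, np.nan],
              [2.0, 3.0],
              [np.nan, 4.0],
              [5.0, 6.0]])
y = [0, 1, 0, 1]

clf = RandomForestClassifier(random_state=42).fit(X, y)
print(clf.predict(X))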

Edit:

Here is a minimal example that reproduces the error:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer

# Example DataFrame with NaN values
data = {
    "tipo_locazione": ["A", "B", None, "A"],
    "flg_polizza_caa": [1, 0, 1, 0],
    "cl_bisogni_3": [0, 1, 1, 0]
}
df = pd.DataFrame(data)

def random_forest_model(variabili):
    X = df[variabili]
    y = df['cl_bisogni_3'].astype(str)

    # Identifying categorical features
    categorical_features = X.select_dtypes(include=['object']).columns

    # Transformer for categorical features
    categorical_transformer = Pipeline(steps=[
        ('onehot', OneHotEncoder(handle_unknown='ignore'))
    ])

    # Preprocessor to apply transformations
    preprocessor = ColumnTransformer(
        transformers=[
            ('cat', categorical_transformer, categorical_features)
        ],
        remainder='passthrough'
    )

    # Model pipeline
    model = Pipeline(steps=[
        ('preprocessor', preprocessor),
        ('classifier', RandomForestClassifier(random_state=42))
    ])

    # Splitting the dataset
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Fitting the model
    model.fit(X_train, y_train)

    print("Model trained successfully")

# Attempt to train the model with NaN values
variables = ['tipo_locazione', 'flg_polizza_caa']
random_forest_model(variables)

Edit 2: the full traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-75-1a77bdc05207> in <cell line: 65>()
     66     selected_variables = sample(variabili_ab, 2)  # Adjust number to be <= length of variabili_ab
     67 
---> 68     metrics = random_forest_model(selected_variables)
     69     results[tuple(selected_variables)] = metrics
     70     count_cicli -= 1

8 frames
<ipython-input-75-1a77bdc05207> in random_forest_model(variabili)
     47 
     48     # Fitting the model
---> 49     model.fit(X_train, y_train)
     50     y_pred = model.predict(X_test)
     51 

/usr/local/lib/python3.10/dist-packages/sklearn/base.py in wrapper(estimator, *args, **kwargs)
   1472                 )
   1473             ):
-> 1474                 return fit_method(estimator, *args, **kwargs)
   1475 
   1476         return wrapper

/usr/local/lib/python3.10/dist-packages/sklearn/pipeline.py in fit(self, X, y, **params)
    473             if self._final_estimator != "passthrough":
    474                 last_step_params = routed_params[self.steps[-1][0]]
--> 475                 self._final_estimator.fit(Xt, y, **last_step_params["fit"])
    476 
    477         return self

/usr/local/lib/python3.10/dist-packages/sklearn/base.py in wrapper(estimator, *args, **kwargs)
   1472                 )
   1473             ):
-> 1474                 return fit_method(estimator, *args, **kwargs)
   1475 
   1476         return wrapper

/usr/local/lib/python3.10/dist-packages/sklearn/ensemble/_forest.py in fit(self, X, y, sample_weight)
    375         estimator = type(self.estimator)(criterion=self.criterion)
    376         missing_values_in_feature_mask = (
--> 377             estimator._compute_missing_values_in_feature_mask(
    378                 X, estimator_name=self.__class__.__name__
    379             )

/usr/local/lib/python3.10/dist-packages/sklearn/tree/_classes.py in _compute_missing_values_in_feature_mask(self, X, estimator_name)
    212 
    213         if not self._support_missing_values(X):
--> 214             assert_all_finite(X, **common_kwargs)
    215             return None
    216 

/usr/local/lib/python3.10/dist-packages/sklearn/utils/validation.py in assert_all_finite(X, allow_nan, estimator_name, input_name)
    214     Test failed: Array contains non-finite values.
    215     """
--> 216     _assert_all_finite(
    217         X.data if sp.issparse(X) else X,
    218         allow_nan=allow_nan,

/usr/local/lib/python3.10/dist-packages/sklearn/utils/validation.py in _assert_all_finite(X, allow_nan, msg_dtype, estimator_name, input_name)
    124         return
    125 
--> 126     _assert_all_finite_element_wise(
    127         X,
    128         xp=xp,

/usr/local/lib/python3.10/dist-packages/sklearn/utils/validation.py in _assert_all_finite_element_wise(X, xp, allow_nan, msg_dtype, estimator_name, input_name)
    173                 "#estimators-that-handle-nan-values"
    174             )
--> 175         raise ValueError(msg_err)
    176 
    177 

ValueError: Input X contains NaN.
RandomForestClassifier does not accept missing values encoded as NaN natively. For supervised learning, you might want to consider sklearn.ensemble.HistGradientBoostingClassifier and Regressor which accept missing values encoded as NaNs natively. Alternatively, it is possible to preprocess the data, for instance by using an imputer transformer in a pipeline or drop samples with missing values. See https://scikit-learn.org/stable/modules/impute.html You can find a list of all estimators that handle NaN values at the following page: https://scikit-learn.org/stable/modules/impute.html#estimators-that-handle-nan-values

print(pd.__version__)
print(np.__version__)

2.0.3
1.25.2

Additional BERT inference becomes slower and kills RAM

I have a pretrained BERT model for Russian, downloaded from Hugging Face into Google Colab. I am trying to extract the vector representation that BERT produces (pooler_output). The code works, but only for several forward passes; after that my session crashes because of the RAM limit. Why does this happen, and why is everything fine for the first few steps? How can I work around this?

I also noticed that each batch takes longer to process than the previous one. Maybe that helps.

import pandas as pd
import torch
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModel

# Assumed: device_name is the inference device used further down.
device_name = 'cuda' if torch.cuda.is_available() else 'cpu'
model = AutoModel.from_pretrained('ai-forever/ruBert-base').to(device_name)
tokenizer = AutoTokenizer.from_pretrained('ai-forever/ruBert-base')


def process_texts(raw_texts):
  texts_raw_list = raw_texts.to_list()
  text_tokens = tokenizer(texts_raw_list, truncation=True, padding=True, return_tensors='pt', max_length = 512)#.input_ids
  inputs = text_tokens['input_ids']
  attention_mask = text_tokens['attention_mask']
  inputs = inputs.to(device_name)
  attention_mask = attention_mask.to(device_name)
  result = model(inputs, attention_mask=attention_mask).pooler_output
  return result

n = 0

for g in range(2734129//48000 + 1):
  total = pd.DataFrame()
  z = pd.read_csv('/drive/MyDrive/data/unparsed.csv', skiprows = 42000 + 48000*g, nrows = 48000)
  z.columns = ['unnamed', 'description', 'vacancy_id']
  raw = z['description']
  for i in tqdm(range(48000)):
    if i%8 == 0:
      res = process_texts(raw.iloc[i*8:i*8+8])
      z = pd.concat([z[['vacancy_id']].reset_index(inplace=False, drop=True), pd.DataFrame(res.tolist()).reset_index(inplace=False, drop=True)], axis = 1)
      total = pd.concat([total, z], axis = 0, ignore_index = True)
    if (i+1)%1200 == 0:
      total.to_csv('/drive/MyDrive/data_embs/embeds/'+str(n)+'.csv')
      total = pd.DataFrame()
      n+=1
  total.to_csv('/drive/MyDrive/data_embs/embeds/'+str(n)+'.csv')
  total = pd.DataFrame()
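For pure inference, the usual pattern is to run the forward pass under torch.no_grad() and to detach the result to the CPU before storing it, so that no computation graph is kept around between batches. A sketch of process_texts rewritten that way (everything else unchanged); whether it fully explains the growing per-batch time depends on the rest of the loop:

import torch

def process_texts(raw_texts):
  texts_raw_list = raw_texts.to_list()
  text_tokens = tokenizer(texts_raw_list, truncation=True, padding=True,
                          return_tensors='pt', max_length=512)
  inputs = text_tokens['input_ids'].to(device_name)
  attention_mask = text_tokens['attention_mask'].to(device_name)
  # no_grad() stops autograd from recording the forward pass, so nothing
  # beyond the output tensor itself has to stay in memory after this call.
  with torch.no_grad():
    result = model(inputs, attention_mask=attention_mask).pooler_output
  # Detach and move to CPU before storing, so accelerator memory is freed.
  return result.detach().cpu()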

Gradio on Colab has Issue with package (typing_extensions)

I am using a Google Colab T4 session, where I need my Gradio app as a web UI for my model. The notebook was fine at first and ran without any errors, but one day it suddenly started showing the following error:

/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_runtime.py:184: UserWarning: Pydantic is installed but cannot be imported. Please check your installation. `huggingface_hub` will default to not using Pydantic. Error message: '{e}'
  warnings.warn(
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-19-d705c8d6a1a0> in <cell line: 1>()
----> 1 import gradio as gr
      2 def input(area,bedroom,bathrooms,parking,stories):
      3   input = [area,bedroom,bathrooms,parking,stories]
      4   output = lm.predict([input])
      5   return int(output)

18 frames
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_core_utils.py in <module>
     14 from pydantic_core import CoreSchema, core_schema
     15 from pydantic_core import validate_core_schema as _validate_core_schema
---> 16 from typing_extensions import TypeAliasType, TypeGuard, get_args, get_origin
     17 
     18 from . import _repr

ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)

My guess at the reason for the error: a recent Colab update created a clash with the typing_extensions version.

I was trying to make a Gradio app that can predict house prices based on my trained model.
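For what it's worth, TypeAliasType only exists in newer typing_extensions releases, so one common workaround is to upgrade that package (and pydantic) and then restart the runtime so the new version is actually imported. A sketch, offered as an assumption about the cause rather than a confirmed fix:

# Upgrade the packages suspected of clashing, then restart the Colab runtime
# (Runtime -> Restart runtime) so the upgraded typing_extensions is picked up.
!pip install --upgrade typing_extensions pydantic gradio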

T5Tokenizer requires the SentencePiece library but it was not found in your environment

I am trying to explore T5.

This is the code:

!pip install transformers
from transformers import T5Tokenizer, T5ForConditionalGeneration
qa_input = """question: What is the capital of Syria? context: The name "Syria" historically referred to a wider region,
 broadly synonymous with the Levant, and known in Arabic as al-Sham. The modern state encompasses the sites of several ancient 
 kingdoms and empires, including the Eblan civilization of the 3rd millennium BC. Aleppo and the capital city Damascus are 
 among the oldest continuously inhabited cities in the world."""
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
input_ids = tokenizer.encode(qa_input, return_tensors="pt")  # Batch size 1
outputs = model.generate(input_ids)
output_str = tokenizer.decode(outputs.reshape(-1))

I got this error:

---------------------------------------------------------------------------

ImportError                               Traceback (most recent call last)

<ipython-input-2-8d24c6a196e4> in <module>()
      5  kingdoms and empires, including the Eblan civilization of the 3rd millennium BC. Aleppo and the capital city Damascus are
      6  among the oldest continuously inhabited cities in the world."""
----> 7 tokenizer = T5Tokenizer.from_pretrained('t5-small')
      8 model = T5ForConditionalGeneration.from_pretrained('t5-small')
      9 input_ids = tokenizer.encode(qa_input, return_tensors="pt")  # Batch size 1

1 frames

/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in requires_sentencepiece(obj)
    521     name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
    522     if not is_sentencepiece_available():
--> 523         raise ImportError(SENTENCEPIECE_IMPORT_ERROR.format(name))
    524 
    525 

ImportError: 
T5Tokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment.


--------------------------------------------------------------------------

After that I installed the sentencepiece library as suggested:

!pip install transformers
!pip install sentencepiece

from transformers import T5Tokenizer, T5ForConditionalGeneration
qa_input = """question: What is the capital of Syria? context: The name "Syria" historically referred to a wider region,
 broadly synonymous with the Levant, and known in Arabic as al-Sham. The modern state encompasses the sites of several ancient 
 kingdoms and empires, including the Eblan civilization of the 3rd millennium BC. Aleppo and the capital city Damascus are 
 among the oldest continuously inhabited cities in the world."""
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
input_ids = tokenizer.encode(qa_input, return_tensors="pt")  # Batch size 1
outputs = model.generate(input_ids)
output_str = tokenizer.decode(outputs.reshape(-1))

but I got another issue:

Some weights of the model checkpoint at t5-small were not used when initializing T5ForConditionalGeneration: ['decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight']

  • This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

I don't understand what is going on. Any explanation?

Running RAM-heavy LLMs on Google Colab

I am trying to run an LLM named OpenAssistant/oasst-sft-1-pythia-12b on Google Colab. The code I have is as follows:

from transformers import AutoTokenizer, AutoModelForCausalLM



MODEL_NAME = "OpenAssistant/oasst-sft-1-pythia-12b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
  MODEL_NAME,
)

and

input_text= """<|prompter|>"A bit of prompt here"<|endoftext|><|assistant|>"""
input_ids = tokenizer(input_text, return_tensors="pt").input_ids



output_ids = model.generate(input_ids, max_length=256)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)

The problem is that the free version of Google Colab offers only 12.7 GB of RAM, and when I run the above code the runtime runs out of memory and the session crashes because the model is too large to fit in RAM.

I tried searching for a solution on the web. Most of the scenarios I found involve people hitting a similar problem while training a model, where the suggested solution is to use smaller batch sizes.

But when running the model just to generate text, the whole model still needs to be loaded into RAM. Is there a way around this?
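For reference, a hedged sketch of the memory-reduction knobs that transformers exposes for loading, assuming accelerate is installed; whether a 12B-parameter model ultimately fits the free 12.7 GB tier is not guaranteed, but these options substantially reduce the footprint compared with the default full-precision load:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "OpenAssistant/oasst-sft-1-pythia-12b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,   # half precision: roughly halves the weight memory
    device_map="auto",           # let accelerate place layers on GPU/CPU as available
    low_cpu_mem_usage=True,      # stream weights instead of building the full model in RAM first
    offload_folder="offload",    # spill layers to disk if GPU/CPU memory runs out
)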

Google Colab "ModuleNotFoundError"

Since yesterday I have been trying to run a Git project in Google Colab, but it does not recognize the project's modules, even though the modules are stored in Drive. With another account without the Pro version, everything worked perfectly.

2023-11-19 13:11:41.722619: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-19 13:11:41.722685: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-19 13:11:41.722737: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-19 13:11:43.215387: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
  File "/content/drive/MyDrive/apbench/poisons_generate.py", line 13, in <module>
    from attack import ar_poisons, dc_poisons, em_poisons, hypo_poisons, lsp_poisons, ops_poisons, rem_poisons, tap_poisons, ntga_poisons
  File "/content/drive/MyDrive/apbench/attack/ntga_poisons.py", line 6, in <module>
    from data.ntga_base.attacks.projected_gradient_descent import projected_gradient_descent
ModuleNotFoundError: No module named 'data'
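One common cause of this kind of ModuleNotFoundError is that the directory containing the data package is not on sys.path, for example when the notebook's working directory differs from the project root on Drive. A generic sketch of putting that directory on the path before running the script; the exact directory depends on the repo layout, the path below is simply the one that appears in the traceback:

import sys

# Path taken from the traceback above; adjust if the package lives elsewhere.
project_root = "/content/drive/MyDrive/apbench"
sys.path.insert(0, project_root)

# Alternatively, make the project root the working directory before running:
# %cd /content/drive/MyDrive/apbench
# !python poisons_generate.py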

No such file or directory issue when using github codes on Colab

I'm currently working with the Geogramint tool, and here is my problem:

When a GitHub repo is cloned into Colab, you usually have to prefix paths with the name of the repo's main directory.

Code on github:

git clone https://github.com/Alb-310/Geogramint.git  
cd Geogramint/  
pip install -r requirements.txt  

python geogramint.py # for GUI mode  
python geogramint.py --help # for CLI mode  

Code to run it on colab:

!git clone https://github.com/Alb-310/Geogramint.git
!cd Geogramint/
!pip install -r Geogramint/requirements.txt
!python3 Geogramint/geogramint.py --help # for CLI mode

So on Colab, pointing at the right directory usually fixes the issue for the tools I use. However, with this specific app, I received the following error:

FileNotFoundError: [Errno 2] No such file or directory: 'appfiles/config.ini'

So clearly, the app cannot find the file, since it is located at 'Geogramint/appfiles/config.ini'.

For more clarification:

/content/Geogramint/CLI/settings_cli.py:44 in saveConfig

   41 │   │   │   │   │    'HASH': str(api_hash),
   42 │   │   │   │   │    'PHONE': str(phone_number)}
   43 │   config['REPORT'] = {'EXTENDED': extended_report}
 ❱ 44 │   with open('appfiles/config.ini', 'w') as configfile:
   45 │   │   config.write(configfile)
   46 │   return loadConfig()

How can I make it run from the right directory?

Thanks in advance for your time.
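For reference, each ! command in Colab runs in its own subshell, so !cd Geogramint/ does not change the working directory for the cells that follow; the %cd magic does persist, and relative paths like appfiles/config.ini then resolve inside the repo folder. A minimal sketch:

!git clone https://github.com/Alb-310/Geogramint.git
%cd Geogramint
!pip install -r requirements.txt
!python3 geogramint.py --help  # CLI mode; relative paths now resolve inside Geogramint/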

Colab and Bigquery: endless 403 errors

My goal is to access my BigQuery data in a Colab notebook, but despite this being what seems like a fairly straightforward task I'm getting endless 403 errors.

I've added a service account to my project and granted it the following roles:

BigQuery Admin
BigQuery Data Editor
BigQuery Data Viewer
BigQuery User
Viewer
Editor

According to the Policy Analyzer, each of these is applied to the project and the datasets. I've created a service account key (JSON) and uploaded it to Colab. Despite that, I get a 403 access denied error each time I run the following:

from google.cloud import bigquery
from google.oauth2 import service_account
import google.auth
credentials = service_account.Credentials.from_service_account_file('pathtoapikey.json')

project_id = 'myproject'
client = bigquery.Client(credentials= credentials,project=project_id)


query_job = client.query("""
   SELECT *
   FROM myproject.mydataset.table_name
   LIMIT 1000 """)

results = query_job.result()

Error:

Forbidden                                 Traceback (most recent call last)
<ipython-input-12-208d568225a8> in <cell line: 15>()
     13    LIMIT 1000 """)
     14 
---> 15 results = query_job.result()

5 frames
/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/query.py in result(self, page_size, max_results, retry, timeout, start_index, job_retry)
   1518                 do_get_result = job_retry(do_get_result)
   1519 
-> 1520             do_get_result()
   1521 
   1522         except exceptions.GoogleAPICallError as exc:

/usr/local/lib/python3.10/dist-packages/google/api_core/retry.py in retry_wrapped_func(*args, **kwargs)
    347                 self._initial, self._maximum, multiplier=self._multiplier
    348             )
--> 349             return retry_target(
    350                 target,
    351                 self._predicate,

/usr/local/lib/python3.10/dist-packages/google/api_core/retry.py in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs)
    189     for sleep in sleep_generator:
    190         try:
--> 191             return target()
    192 
    193         # pylint: disable=broad-except

/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/query.py in do_get_result()
   1508                     self._job_retry = job_retry
   1509 
-> 1510                 super(QueryJob, self).result(retry=retry, timeout=timeout)
   1511 
   1512                 # Since the job could already be "done" (e.g. got a finished job

/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/base.py in result(self, retry, timeout)
    909 
    910         kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
--> 911         return super(_AsyncJob, self).result(timeout=timeout, **kwargs)
    912 
    913     def cancelled(self):

/usr/local/lib/python3.10/dist-packages/google/api_core/future/polling.py in result(self, timeout, retry, polling)
    259             # pylint: disable=raising-bad-type
    260             # Pylint doesn't recognize that this is valid in this case.
--> 261             raise self._exception
    262 
    263         return self._result

Forbidden: 403 Access Denied: BigQuery BigQuery: Permission denied while globbing file pattern.

Location: US
Job ID: 

Things I've tried:

  1. Deleting all service accounts.

  2. Re-permissioning everything.

  3. Removing all permissions from all users other than the owner, creating a new service account, then adding permissions back.

  4. Creating new service account keys (and updating them in Colab, etc.).

  5. Tearing the whole thing down and starting from scratch.

Etc. It really seems like this should be a pretty easy permissioning item, but for the life of me I can't find the missing link. Any help greatly appreciated.
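For what it's worth, the specific wording "Permission denied while globbing file pattern" often points at an external table backed by files (for example a Google Drive or Sheets source) rather than at the BigQuery roles themselves; in that case the credentials also need the Drive scope, and the underlying file must be shared with the service account. A hedged sketch of building scoped credentials, reusing the same placeholder key path and project as above:

from google.cloud import bigquery
from google.oauth2 import service_account

# Extra scopes matter when the queried table is an external, Drive-backed table.
credentials = service_account.Credentials.from_service_account_file(
    'pathtoapikey.json',
    scopes=[
        'https://www.googleapis.com/auth/cloud-platform',
        'https://www.googleapis.com/auth/drive',
    ],
)
client = bigquery.Client(credentials=credentials, project='myproject')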

How to run docker on google colab

I tried installing Docker on Google Colab using the following commands, which I found in another question:

%%shell     

sudo apt update -qq

sudo apt install apt-transport-https ca-certificates curl software-properties-common -qq

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" -y

sudo apt update -qq -y

sudo apt install docker-ce -y

docker

Then I try confirming docker works with the following:

!docker --version
!docker info
!docker run hello-world

Which gives the following info and error:

Docker version 24.0.2, build cb74dfc

Client: Docker Engine - Community
 Version:    24.0.2
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.5
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.18.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.

Then I tried various commands to start the Docker daemon, such as !sudo systemctl start docker, but nothing worked.
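For context, the Colab VM has no systemd, so systemctl cannot start services there. One thing that can be tried, offered only as a sketch and not a guaranteed fix (the runtime may not grant the privileges dockerd needs), is launching the daemon directly in the background and inspecting its log:

# Start dockerd in the background; system_raw keeps it running after the cell ends.
get_ipython().system_raw('dockerd > /tmp/dockerd.log 2>&1 &')

import time
time.sleep(5)                 # give the daemon a moment to start (or to fail)

!tail -n 20 /tmp/dockerd.log  # inspect why it did or did not come up
!docker info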

Why are you closing RVC files and its Colab script?

What's your problem? Why are you banning RVC in Colab? RVC is just AI software that everyone can use. And to add insult to injury, you're banning some innocent users just for using this script. RVC has no intention of harming others' privacy. We're using that Colab script for our work, so why are you banning it!? RVC has its own team developing that script, and you just ruined it. DON'T BAN OUR SOFTWARE BECAUSE WE'RE WORKING THERE! THE TEAMS ARE INNOCENT AND THEY WORK HARD ALL DAY TO DEVELOP IT. YOU SHOULD BE ASHAMED FOR BANNING OUR SOFTWARE WE'RE USING FOR OUR COMMUNITY, BECAUSE YOU RUINED EVERYTHING BY JUST UNFAIRLY BANNING RVC! DON'T BAN IT, OR EVERYBODY WILL USE FORCE TO UNBAN IT!! WE'RE WORKING ON OUR PROJECTS THERE!!! STOP BANNING IT AND LEAVE THEIR INNOCENT SCRIPT ALONE!!!!!!

They're trying to ban RVC's harmless Colab script, and they've started banning RVC files and some users for using RVC. STOP IT!!! YOU'RE GETTING US FIRED FROM THE COMMUNITY WE'RE WORKING IN.

How to upload csv file (and use it) from google drive into google colaboratory

I wanted to try out Python, and Google Colaboratory seemed the easiest option. I have some files in my Google Drive and wanted to upload them into Google Colaboratory, so here is the code that I am using:

!pip install -U -q PyDrive

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

# 2. Create & upload a file text file.
uploaded = drive.CreateFile({'xyz.csv': 'C:/Users/abc/Google Drive/def/xyz.csv'})
uploaded.Upload()
print('Uploaded file with title {}'.format(uploaded.get('title')))

import pandas as pd
xyz = pd.read_csv('Untitled.csv')

Basically, for user "abc", I wanted to upload the file xyz.csv from the folder "def". I can upload the file, but when I ask for the title it says the title is "Untitled". When I ask for the ID of the uploaded file, it changes every time, so I can't use the ID.

How do I read the file, and how do I set a proper file name?

xyz = pd.read_csv('Untitled.csv') doesn't work
xyz = pd.read_csv('Untitled') doesn't work
xyz = pd.read_csv('xyz.csv') doesn't work

Here are some other links that I found:

How to import and read a shelve or Numpy file in Google Colaboratory?

Load local data files to Colaboratory
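For reference, a simpler pattern that avoids the PyDrive metadata entirely is to mount Drive into the Colab filesystem and read the file by path. A minimal sketch; the path below is an assumption about where the file lives in Drive, so adjust 'def' to the real folder name:

from google.colab import drive
import pandas as pd

# Mount Google Drive into the Colab filesystem (asks for authorization once).
drive.mount('/content/drive')

# Read the CSV directly from its Drive folder.
xyz = pd.read_csv('/content/drive/My Drive/def/xyz.csv')
print(xyz.head())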

Looping through class objects to find certain attributes using google colab

I am trying to find a way to loop through the different skills for my game. My code is below:


class Skill:
  def __init__(self, user, desc, effect, ea, amount, am):
    self.user = user
    self.desc = desc
    self.effect = effect
    self.ea = ea  # was silently dropped before; stored so the attribute is available
    self.amount = amount
    self.am = am

s1 = Skill("user1", "lightning blade", "Decrease", "health", 165419, 34743)
s2 = Skill("user1", "hurricane dash", "Decrease", "Speed", 113479, 95791)
s3 = Skill("user2", "earthquake", "Increase", "attack", 163479, 35791)

#want to loop through class object to auto find "lightning blade" skill

The objective is to print/use the skill 'lightning blade' from the list of objects above, ignoring "hurricane dash" from the same user and user2's skill "earthquake". When "lightning blade" is found, I also want to get its amount and am attributes.
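A minimal sketch of one way to do this: keep the objects in a list so they can be searched by the desc attribute, then read the other attributes off the match.

skills = [s1, s2, s3]

# Find the first skill whose description matches, then use its attributes.
for skill in skills:
    if skill.desc == "lightning blade":
        print(skill.user, skill.desc, skill.amount, skill.am)
        break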

Google Colab: NotImplementedError: RarFile supports only mode=r

I'm trying to create a separate .rar archive for each folder in my path. I'm using Google Colab to do this, but I'm getting this error back

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-42-544377e1db61> in <cell line: 8>()
      8 for folder in os.listdir(current_directory):
      9     # Create a RAR archive for each folder
---> 10     with rarfile.RarFile(folder + ".rar", "w") as archive:
     11         # Add all the files in the folder to the archive
     12         folder_path = os.path.join(current_directory, folder)

/usr/local/lib/python3.10/dist-packages/rarfile.py in __init__(self, file, mode, charset, info_callback, crc_check, errors)
    685 
    686         if mode != "r":
--> 687             raise NotImplementedError("RarFile supports only mode=r")
    688 
    689         self._parse()

NotImplementedError: RarFile supports only mode=r

I am using this code

import os
import rarfile

# Get the current directory
current_directory = os.getcwd()

# Iterate over all the folders in the current directory
for folder in os.listdir(current_directory):
    # Create a RAR archive for each folder
    with rarfile.RarFile(folder + ".rar", "w") as archive:
        # Add all the files in the folder to the archive
        folder_path = os.path.join(current_directory, folder)
        for file in os.listdir(folder_path):
            file_path = os.path.join(folder_path, file)
            archive.write(file_path, os.path.basename(file_path))

# Print a message to the user
print("All folders have been compressed into RAR archives.")

Is there any way to do that?
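For context, the rarfile package is read-only by design (creating .rar files needs the proprietary RAR tool), so this error is expected rather than a Colab issue. If the archive format itself is negotiable, a per-folder archive can be built with the standard library instead; a minimal sketch producing .zip files:

import os
import shutil

current_directory = os.getcwd()

for folder in os.listdir(current_directory):
    folder_path = os.path.join(current_directory, folder)
    if os.path.isdir(folder_path):
        # Creates <folder>.zip in the current directory from the folder's contents.
        shutil.make_archive(folder, "zip", root_dir=folder_path)

print("All folders have been compressed into ZIP archives.")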

Restart kernel in Google Colab

I'm trying to restart the kernel in a Google Colab Jupyter notebook from a cell. The option given previously:

import os
os._exit(00)

is ok, but it seems to me that this is not a very "pythonic" way of restarting the kernel. The other option:

from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")

seems more "pythonic" (better) to me, but it is not working.

Is there something specific to Google Colab that I should have done?

Best regards,

Gustavo,

Run AutoGPT in Google Colab. Chrome not reachable

I want to run AutoGPT in Colab but it fails with:

  System: Command browse_website returned: Error: Message: unknown error: Chrome failed to start: exited abnormally.
  (chrome not reachable)
  (The process started from chrome location /usr/bin/chromium-browser is no longer running, so ChromeDriver is assuming that Chrome has crashed.)

I tried installing Chrome like this:

!apt-get update
!apt-get install -y chromium-browser

Checking

!whereis chromium-browser 

shows that it is in

 /usr/bin/chromium-browser 

It's quite unclear to me how to debug this. Any ideas? Firefox also failed.
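One way to narrow this down, independent of AutoGPT, is to check whether headless Chrome can start on the runtime at all. A sketch, assuming chromium-chromedriver and selenium are installed alongside the browser; the flags are the usual ones for containerized environments without a display:

# !apt-get install -y chromium-chromedriver
# !pip install selenium
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")               # no display available on Colab
options.add_argument("--no-sandbox")             # required when running as root
options.add_argument("--disable-dev-shm-usage")  # avoid the small /dev/shm in containers

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()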

Is there a way to fix NameError? [closed]

The code for my AI robot is not working, and I need help retrieving the system name and using it in the code. This is my code and the error:

from os import name
message =  Hello, (name)
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-8-8441385e4166> in <cell line: 2>()
      1 from os import name
----> 2 message =  Hello, (name)

NameError: name 'Hello' is not defined
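For reference, a minimal sketch of how string interpolation works in Python, plus a note that os.name only reports the OS family (for example 'posix'); platform.node() or getpass.getuser() are closer to a machine or user name if that is what is wanted:

import os
import platform
import getpass

# os.name is the OS family, e.g. 'posix' on Colab, not the machine's name.
message = f"Hello, {os.name}"
print(message)

# If the goal is the host name or the current user instead:
print(f"Hello, {platform.node()}")    # machine/host name
print(f"Hello, {getpass.getuser()}")  # current user name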


โŒ
โŒ