Fatal error: glibc. Trying to put char in file

I'm trying to write four characters to a file and then read them back and print them, prompting the user for four chars. However, I get an error (Fatal error: glibc detected an invalid stdio handle).

After debugging, I found the fault at the line fputc(c[3], putc): after prompting the user for the fourth time, it cannot write the fourth char c[3] to the file.

Please help me to understand.

Here's my code:

    FILE* putc = fopen("test3.txt", "w");

    if(putc == NULL)
    {
        return 1;
    }

    char c[4];

    for(int i = 0; i < 4; i++)
    {
        printf("char: ");
        scanf("%s", &c[i]);
        fputc(c[i], putc);
    }


    fclose(putc);

    FILE* getc = fopen("test3.txt", "r");

    if(getc == NULL)
    {
        return 1;
    }

    char abc;

    while ((abc = fgetc(getc)) != EOF)
    {
        printf("%c", abc);
    }
    printf("\n");

    fclose(getc);
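
A minimal sketch of a safer version, assuming the intent is to read one character per prompt: %s writes a whole NUL-terminated string through a pointer to a single char, so scanf("%s", &c[i]) can overwrite neighbouring variables (including the FILE* handles), and naming those handles putc and getc also shadows the standard functions of the same name:

    #include <stdio.h>

    int main(void)
    {
        /* Renamed so they no longer shadow putc()/getc(). */
        FILE *out = fopen("test3.txt", "w");
        if (out == NULL)
            return 1;

        char c[4];
        for (int i = 0; i < 4; i++)
        {
            printf("char: ");
            /* " %c" reads exactly one character (skipping whitespace)
               instead of a whole NUL-terminated string. */
            if (scanf(" %c", &c[i]) != 1)
                return 1;
            fputc(c[i], out);
        }
        fclose(out);

        FILE *in = fopen("test3.txt", "r");
        if (in == NULL)
            return 1;

        int ch;                 /* int, so EOF can be told apart from a char */
        while ((ch = fgetc(in)) != EOF)
            printf("%c", ch);
        printf("\n");

        fclose(in);
        return 0;
    }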

lmdb read slow when file amount is a little larger

I installed lmdb from python:

python -m pip install lmdb

Then I write images into it. I have two settings: one inserts 6800 images, the other 10000 images. All the images have a size of 1024x2048, they are in PNG format, and my machine's memory size is 1 TB.

Here is part of the code with which I read images from it:

    def init_lmdb(self):
        self.env = lmdb.open(
                self.file_root,
                map_size=2**40,
                readonly=True,
                max_readers=512,
                readahead=False,
                )

    def get_bins(self, inds):
        im_bins = []
        with self.env.begin(write=False) as txn:
            for ind in inds:
                impth = self.im_paths[ind]
                im_bin = txn.get(impth)
                im_bins.append(np.frombuffer(im_bin, dtype=np.uint8))
        return im_bins

I used identical code to read from the two lmdb files. I found that the first lmdb, with 6800 images, needs about 9 s to read 6400 images in random order, while the second lmdb, with 10000 images, requires 16 s to read the same number of images in random order.

Could you tell me what causes the difference?

PS: I do not read each image only once. Actually, I read from only half of the images: for the lmdb with 6800 images, I only read from 3400 of them, and for the lmdb with 10000 images, I only read from a subset of 5000 images. The images are read in random order, and an image can be read more than once. I measured the time until the total number of reads reached 6400 (a rough sketch of the loop is below).
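
For reference, a rough sketch of that measurement loop (reader stands for an instance of the class with the init_lmdb/get_bins methods above; the batch size and seed are arbitrary):

    import random
    import time

    def time_random_reads(reader, n_reads=6400, subset_ratio=0.5, batch_size=64, seed=0):
        """Time n_reads random lookups drawn from a fixed subset of the keys."""
        random.seed(seed)
        n_subset = int(len(reader.im_paths) * subset_ratio)   # only half of the images
        subset = list(range(n_subset))

        start = time.time()
        done = 0
        while done < n_reads:
            inds = random.choices(subset, k=batch_size)       # repeats are allowed
            reader.get_bins(inds)
            done += batch_size
        return time.time() - start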

Why don't local variable initialisation instructions contribute to the executable size as much as global variables do?

The text I am reading says:

Global variables are located in the executable image, so using a large number of global variables will increase the size of the executable image.

Questions:

  1. What does 'located in executable image' mean? From what I have read, global variables are located in the 'data' section of the executable. I presume that local variable initialisation instructions are stored in the 'text' section. So why don't the local variable initialisation instructions take up about the same amount of space as the global variables do?

  2. By 'executable', does it mean the executable loaded into memory, or the executable as it sits in non-volatile storage? Will the global variables also take up more space in an executable that is not loaded into RAM?

  3. Are there books or concise reading resources I may refer to that will help me with such lower level concepts?

I expected the local variable initialization instructions to take up the same amount of space in the executable as global variables do. Consider the following program:

#include <stdlib.h>

int global_var = 10;

int main(void){

    int local_var = 20;
    return EXIT_SUCCESS;
}

When converted to a machine-level executable (assuming it's not loaded into memory, i.e. not a process), I assume both the definition and initialisation of global_var and local_var will be encoded in the executable, albeit in different sections (data and text). So why would global_var take up more space?
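
To make the contrast concrete, here is a small sketch (my own illustration, not from the quoted text): the bytes of a nonzero-initialized global are stored in the file's .data section, while a local's initialization is typically emitted as a short instruction sequence (or a memset call) in .text, regardless of how large the object is:

    #include <stdlib.h>

    /* Initialized global: roughly 40000 bytes of initial values end up
       in the .data section of the executable file itself. */
    int global_table[10000] = {1};

    int main(void)
    {
        /* Local: nothing is stored in the file for the values; the
           compiler typically emits only a few instructions (often a
           memset plus one store) that build the array on the stack
           at run time. */
        int local_table[10000] = {1};
        (void)local_table;
        return EXIT_SUCCESS;
    }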

Why can I not increase my JAVA heap space?

I am working on a project in which execution at a certain data size requires additional heap space. The actual problem is that I have tried multiple approaches to solve this, but all in vain.

At first, I started by editing my launch.json file, since I am using VS Code, and tried adding the -Xmx parameter to vmArgs: nothing. Then I added a portion of code to my Main class, which goes as follows:

 System.out.println("====================================================================");
 System.out.println("JVM arguments:");
 RuntimeMXBean runtimeMxBean = ManagementFactory.getRuntimeMXBean();
 List<String> jvmArgs = runtimeMxBean.getInputArguments();
 for (String arg : jvmArgs) {
     System.out.println(arg);
 }

 // Print the actual max heap size available to the JVM
 System.out.println("Max Memory: " + (Runtime.getRuntime().maxMemory() / (1024 * 1024)) + " MB");
 // Your application code goes here
 System.out.println("====================================================================");

 long maxMemory = Runtime.getRuntime().maxMemory();
 System.out.println("Max Memory: " + maxMemory / (1024 * 1024) + " MB");

This code displays the JVM arguments and, of course, the actual max memory available to Java on my system. It is initially set at 1896 MB. Output:

====================================================================
JVM arguments:
-XX:+ShowCodeDetailsInExceptionMessages
Max Memory: 1896 MB

Max Memory: 1896 MB

I tried adding the JAVA_OPTS environment variable to the system... nothing changed after re-executing...

./././.>"C:\Program Files\Java\jdk-15.0.2\bin\java.exe" -Xmx2G -cp <path_to_my_main_class> Main

I tried executing from the terminal and specifying the -Xmx value, but my Main class wasn't recognized, even though I checked the syntax, path and class name multiple times; same thing in cmd and the VS Code terminal (Windows 11, by the way). And still, when I execute under VS Code, the main class runs correctly before ending with java.lang.OutOfMemoryError.

I also noticed that I had two different versions of Java on my PC by executing the command where java:

    C:\Program Files\Common Files\Oracle\Java\javapath\java.exe
    C:\Program Files (x86)\Common Files\Oracle\Java\javapath\java.exe

while VSCode execution starts this way: PS D:\Studies\M1\S2\TPs\méth\Projet Méta> d:; cd 'd:\Studies\M1\S2\TPs\méth\Projet Méta'; & 'C:\Program Files\Java\jdk-15.0.2\bin\java.exe' '-XX:+ShowCodeDetailsInExceptionMessages' '-cp' 'C:\Users\wahee\AppData\Roaming\Code\User\workspaceStorage\bdc4f5af61bc4e992a3808de6bdc92e9\redhat.java\jdt_ws\Projet Méta_de071d7c\bin' 'Main'

This seems to use another Java instance... which I added to the PATH system variable, and still nothing (I made sure to restart my PC every time).

I browsed Stack Overflow and Google, and pushed ChatGPT to its limits, to the point where it just keeps apologizing and repeating the same steps over and over again...

I also opened the Control Panel and added -Xmx and -Xms arguments in the Java settings dialog box... No change...

I am running on an x64 architecture... What am I missing in all this? Or should I try something else? Thank you.
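
For reference, a minimal sketch of the launch.json entry this usually comes down to (the mainClass value is a placeholder and the heap sizes are examples): as far as I know, the VS Code Java debugger takes heap settings from vmArgs in the launch configuration, not from environment variables like JAVA_OPTS:

    {
        "type": "java",
        "name": "Launch Main",
        "request": "launch",
        "mainClass": "Main",
        "vmArgs": "-Xmx4G -Xms512m"
    }

With that in place, the RuntimeMXBean snippet above should list -Xmx4G among the JVM arguments and report a larger max memory.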

Would event listeners prevent garbage collecting objects referenced in outer function scopes?

Let's assume I have the following code:

(function () {

    const largeObject = provideSomeLargeObject();
    const largeStaticListOfElements = document.querySelectorAll('span');
    const someElementThatWillBeRemoved = document.getElementById('i-will-be-removed');

    const elms = document.getElementsByClassName('click-me');

    for (let i = 0; i < elms.length; ++i) {

        const anotherLargeObject = provideAnotherLargeObject();

        elms[i].addEventListener('click', function (e) {
            // callback just uses i, which contains a literal value
            e.preventDefault();
            console.log(i);
        });
    }

    // largeObject, largeStaticListOfElements, someElementThatWillBeRemoved, and anotherLargeObject
    // are no longer used in this code. Should they be set to null to allow GC them?

})();

The event listeners create closures, and after the above code executes, the event listeners still exist; thus the outer function scopes are preserved so that the event handlers can use the variables they contain.

However, the event handlers only use a literal value from the outer scopes, while those scopes also contain other variables that are not used anymore.

  • Would the event listeners prevent garbage collection of the objects referenced by these variables, so that I have to manually assign null to the variables at the end of the code,
  • or are JavaScript engines smart enough to detect that the event handler never uses those variables, so the referenced objects can be garbage collected even if the variables reside in a still-existing function scope? (See the sketch after this list.)
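
One way to sidestep the question entirely (a sketch, assuming the engine might keep the whole outer scope alive): register the listeners through a function declared outside the outer scope, so the handler's scope chain cannot reach largeObject and the other variables at all:

    function attachHandler(el, index) {
        // The handler's closure only sees attachHandler's scope (el, index)
        // and the global scope, never the IIFE's variables.
        el.addEventListener('click', function (e) {
            e.preventDefault();
            console.log(index);
        });
    }

    (function () {
        const largeObject = provideSomeLargeObject();
        // ... use largeObject synchronously ...

        const elms = document.getElementsByClassName('click-me');
        for (let i = 0; i < elms.length; ++i) {
            attachHandler(elms[i], i);
        }
    })();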

GraphQL Apollo: caching a query that has no id

I'm writing an app that uses the AniList GraphQL backend. This backend provides pagination via a Page query, so the paginated query itself looks like this:

  1. For Anime screen:
    query ($page: Int, $perPage: Int) {
      Page (page: $page, perPage: $perPage) {
        media (type: ANIME) {
          id
          title {
            romaji
          }
        }
      }
    }
  2. For Manga screen:
    query ($page: Int, $perPage: Int) {
      Page (page: $page, perPage: $perPage) {
        media (type: MANGA) {
          id
          title {
            romaji
          }
        }
      }
    }

In my app there are two screens, Manga and Anime; they both send this query but with a different type (ANIME or MANGA). And at this point I faced a problem: the Page query has no id, so it's impossible to split the cached Page query for ANIME from the Page query for MANGA. Pagination also compounds the problem, as each time the user reaches the end of the list I send a new Page query. As I see it, it should work in the following way:

  1. The Page query for ANIME and the one for MANGA should be separated in the cache; in other words, the cache should somehow know that they are different objects.
  2. On the response to a next-page request, I should merge media into the MANGA Page or into the ANIME Page.

Do you have any ideas how to achieve this? For now I am considering clearing the cache on screen change, which would let me avoid the first part of the problem: the cache would always contain just one Page query. But the second part of the problem would remain: on each new page request, the Page entry in the cache would simply be overwritten.
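
A sketch of one possible cache setup, assuming Apollo Client 3's InMemoryCache type policies (field and argument names mirror the queries above; the merge logic is illustrative and untested): key the media field by its type argument so ANIME and MANGA get separate lists and append incoming pages there, while Query.Page ignores its pagination arguments so every page is written into the same entry:

    import { InMemoryCache } from '@apollo/client';

    const cache = new InMemoryCache({
      typePolicies: {
        Query: {
          fields: {
            // Ignore page/perPage so each new page is merged into the
            // same Page entry instead of creating one per page number.
            Page: {
              keyArgs: false,
              merge: true,
            },
          },
        },
        Page: {
          keyFields: false, // Page has no id; don't try to normalize it
          fields: {
            media: {
              // Separate lists per media type, appending new pages.
              keyArgs: ['type'],
              merge(existing = [], incoming) {
                return [...existing, ...incoming];
              },
            },
          },
        },
      },
    });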

Memory corruption on the heap only when using one specific function

I am creating an application that downloads files from a URL and I'm having a big problem with getting the substring of the URL to use as the download's name. I have spent the last 4 hours researching, watching videos, asking AI and more, but I simply can't figure this out. My problem is that when I allocate memory on the heap to store the last substring of a URL, which I get from my own function, once the statement completes the value gets completely corrupted into a long run of 'f' characters. This only happens when using this one function of mine, which I assume is causing some sort of memory corruption, but I can't figure it out.

Below you can see that I am allocating a char buffer that can hold up to 256 characters, which should be more than enough for pretty much any substring of a URL, and declaring and initializing it using my getLastTextUrl function:

    char* alloc2 = new char[256];
    const char* AIO = getLastTextUrl("https://habitual.cc/wp-content/uploads/2023/10/RuntimeFiles.zip", alloc2);

my getLastTextUrl function

const char* getLastTextUrl(const std::string &url, char* &alloc) 
{
    size_t pos = url.find_last_of("/") + 1;
    if (pos != std::string::npos) 
    {
        alloc = (char*)url.c_str() + pos;
        return url.c_str() + pos;
    }
    return url.c_str(); 
}

I have debugged it multiple times and I believe the function works correctly: after alloc = (char*)url.c_str() + pos; runs, alloc is assigned correctly (RuntimeFiles.zip). I then step twice to finish the function, and it takes me back to the main function, which is still highlighting const char* AIO = getLastTextUrl("https://habitual.cc/wp-content/uploads/2023/10/RuntimeFiles.zip", alloc2); as it is debugging. When hovering over alloc2 now, it holds the correct string (RuntimeFiles.zip), but the moment I step over the next statement, which is just char* alloc = new char[256];, alloc2 and AIO become corrupted and turn into a lot of 'f' characters, like I mentioned.

I am pretty new to C++ and this is my first time ever posting on Stack Overflow, so sorry if there is some sloppy code or something doesn't make sense. If anyone needs some clarification, please let me know, and the same goes for suggestions.
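
A sketch of what the issue and one possible fix might look like (my reading of the snippet, with a changed signature, so treat it as illustrative): the std::string parameter is a temporary built from the string literal, so both the returned pointer and alloc end up pointing into memory that is released as soon as the call returns; copying the tail of the URL into the caller's 256-byte buffer avoids the dangling pointer:

    #include <cstring>
    #include <string>

    // Copy the text after the last '/' into the caller-provided buffer
    // instead of returning a pointer into a temporary std::string.
    const char* getLastTextUrl(const std::string &url, char *alloc, std::size_t alloc_size)
    {
        const std::size_t slash = url.find_last_of('/');
        const std::size_t pos = (slash == std::string::npos) ? 0 : slash + 1;

        std::strncpy(alloc, url.c_str() + pos, alloc_size - 1);
        alloc[alloc_size - 1] = '\0';   // always NUL-terminate
        return alloc;
    }

    // Usage:
    //   char* alloc2 = new char[256];
    //   const char* AIO = getLastTextUrl(
    //       "https://habitual.cc/wp-content/uploads/2023/10/RuntimeFiles.zip",
    //       alloc2, 256);
    //   // alloc2 and AIO now refer to a copy that outlives the temporary string.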

Allocating/Deallocating Memory in Rust compiled to Webassembly

My Situation
I have been trying to make an IO game with parts of the client in Rust, compiled with wasm-pack to WebAssembly, which I access through JavaScript. Rust is used here as a library and gets called when packets need to be handled and the game logic needs to run. For it to be able to handle the logic, though, it needs the context of all the entities and the map in the game, which can be a lot of data. The functions that need the game context are called hundreds of times per second, and I don't want to implement any batching logic; I just want to avoid the problem entirely. That's why I decided to use the WebAssembly memory system (https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/Memory) to store the game state so I can retrieve it easily from Rust.

TLDR: I want to avoid passing the game context (I'm making an IO game, using JS and Rust compiled to WASM with wasm-bindgen and wasm-pack) from JavaScript to Rust and instead want to store it in memory.

Actual Problem
How do I allocate and deallocate, read from and write to this memory? My current implementation (below):

  • Allocation: Vec with Vec::with_capacity
  • Deallocation: std::slice::from_raw_parts_mut(ptr to the context, len) and let the slice go out of scope
  • Write: Set the dereferenced value to T
  • Read: Dereference a pointer as the type T

lib.rs

pub mod memory_operations {
    [...] Includes
    
    ///Write
    unsafe fn write_mem<T>(ptr: *mut u8, x: T) {
        let ptr = ptr as *mut T;
        unsafe {
            *ptr = x;
        }
    }
    
    /// READ
    unsafe fn read_mem<'a, T>(ptr: *mut u8) -> &'a mut T {
        let ptr = ptr as *mut T;
        unsafe {
            &mut *ptr
        }
    }

    /// Allocate
    unsafe fn allocate_memory<T>() -> MemorySpace {
        let size = std::mem::size_of::<T>();
        let mut memory = Vec::with_capacity(size);
        let ptr: *mut u8 = memory.as_mut_ptr();
        std::mem::forget(memory);
        MemorySpace::new(ptr as *mut T)
    }

    /// Deallocate without getting T back and without calling the destructor
    unsafe fn deallocate_memory_raw(memory: MemorySpace) {
        let buffer = unsafe {
            std::slice::from_raw_parts_mut(memory.ptr(), memory.size())
        };
        drop(buffer);
    }

    /// Deallocate a Layout in memory as a type T and get that type T back
    unsafe fn deallocate_memory_type<'a, T>(memory: MemorySpace) -> &'a mut T {
        let ptr = memory.ptr() as *mut T;
        println!("Dealloc Type");
        unsafe { &mut *ptr }
    }
    
    #[derive(Debug, Clone, Copy)]
    /// Simple Wrapper for Layout and pointer to pass it to JS
    pub struct MemorySpace {
        offset: *mut u8, 
        size: usize,
        align: usize,
    }

    impl MemorySpace {
        pub fn new<T: Sized>(offset: *mut T) -> MemorySpace {
            let layout = 
            unsafe { alloc::Layout::for_value_raw(offset) };

            let offset = offset as *mut u8;
            MemorySpace {offset, size: layout.size(), align: layout.align()}
        }

        pub fn ptr(&self) -> *mut u8 {self.offset}
        pub fn size(&self) -> usize {self.size}
        pub fn align(&self) -> usize {self.align}

        pub fn layout(&self) -> Result<Layout, LayoutError> {
            Layout::from_size_align(self.size, self.align)
        }
    }
}

This currently fails when writing; it seems to either not allocate at all, or to allocate and immediately deallocate the memory. I'm currently testing this as a binary instead of a WASM file so I can debug the problems early.

Question
I'm wondering if this is the correct approach to accessing memory when compiling to WASM (if you think this memory approach is a bad idea, let me know). If it is, how would I fix this code so it runs both as a binary and as a WASM file?
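
A sketch of an alternative, assuming the goal is just "give me a block big enough and correctly aligned for T": std::alloc with an explicit Layout respects T's alignment (a Vec<u8> only guarantees alignment 1), and dealloc actually returns the block to the allocator (dropping a slice made with from_raw_parts_mut frees nothing). This is not wired up to wasm-bindgen and is only a starting point:

    use std::alloc::{alloc, dealloc, Layout};

    /// Allocate an uninitialized block sized and aligned for a T.
    pub unsafe fn allocate_memory<T>() -> *mut u8 {
        // A real version should check for a null return.
        alloc(Layout::new::<T>())
    }

    /// Write a T into previously allocated memory.
    pub unsafe fn write_mem<T>(ptr: *mut u8, value: T) {
        (ptr as *mut T).write(value);
    }

    /// Read a T back out by reference.
    pub unsafe fn read_mem<'a, T>(ptr: *mut u8) -> &'a mut T {
        &mut *(ptr as *mut T)
    }

    /// Drop the T and hand the block back to the allocator.
    pub unsafe fn deallocate_memory<T>(ptr: *mut u8) {
        std::ptr::drop_in_place(ptr as *mut T);
        dealloc(ptr, Layout::new::<T>());
    }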

What is the difference Between 'Dirty Memory' and 'Dirty Size' in iOS VM Tracker?

In the WWDC 2022 session (https://developer.apple.com/videos/play/wwdc2022/10106/), they explained that memory allocation can be categorized into three types: Dirty, Compressed Dirty, and Clean. Dirty memory refers to allocations like heap allocations, which tend to have a high resident memory ratio.

And when I profile an iOS app with VM Tracker in Xcode Instruments, I can see that allocations like malloc are categorized as the Dirty type.

[Screenshot: VM Tracker in Instruments showing malloc allocations categorized as the Dirty type]

Still, I'm confused due to the presence of both a 'Dirty' memory type and a 'Dirty Size' metric at the top of the view.

My guess is that the Dirty memory 'type' refers to types of allocations like malloc, which are likely to result in dirty pages.

And Dirty Size metric might represent the cumulative size of allocated memory pages that have been written to and thus have become dirty pages.

For example, malloc is a "Dirty" type, but its "Dirty Size" can be almost zero bytes: I can allocate a new array with malloc but not assign any values to it, so the committed pages won't actually be mapped to physical memory.

int *array = malloc(2000 * sizeof(int));

Still, I'm not sure about it. I would be grateful if anyone could clarify the exact definitions of these terms.
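
To illustrate the guess above, a small sketch (my assumption, not something stated in the session): the allocation is counted under the Dirty type either way, but the pages should only contribute to Dirty Size once they are actually written:

    #include <stdlib.h>
    #include <string.h>

    void demo(void)
    {
        /* Counted as a "Dirty"-type allocation, but the freshly committed
           pages have not been touched yet, so Dirty Size stays near zero. */
        int *array = malloc(2000 * sizeof(int));

        /* Writing touches the pages; they become dirty and should now be
           reflected in the Dirty Size metric. */
        memset(array, 0xFF, 2000 * sizeof(int));

        free(array);
    }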

java.lang.OutOfMemoryError during unit tests

My team and I are working on a REST API using Spring Boot. It's composed of controllers, services and JpaRepository interfaces. Shamefully, it is a monolithic app. "There's no time for it" is the answer when I suggest changing to microservices, so we have to work with what we have.

The API is growing fast, and that includes the unit tests. Right now, we have 1267 individual test cases and more than 300 test classes (services and controllers included).

The service test classes are like this:

@SpringBootTest(classes = Project.class)
@TestPropertySource(locations = "classpath:test.properties")
@DirtiesContext(classMode=ClassMode.AFTER_CLASS)
public class Services_CategoryTBTest {

    @TestConfiguration
    static class Service_CategoryTbTestContextConfiguration{
        @Bean
        public GenericService<CategoryModifDTO, CategoryIdDTO, String> categoryService(){
            return new Service_CategoryTB_impl();
        }
    }

    @Autowired
    Service_CategoryTB_impl service;

    @MockBean
    CategoryRepository repository;

    private void setMock(String id) {
        Optional<CategoryTb> categoryTB = Optional.of(new CategoryTb());
        categoryTB.get().setCodcat(id);

        Mockito.when(repository.existsById(id)).thenReturn(true);
        Mockito.when(repository.findById(id)).thenReturn(categoryTB);
    }

    private void setSeveralMock() {

        CategoryTb categoryTB = new CategoryTb();
        categoryTB.setCodcat("1");

        Mockito.when(repository.findAll()).thenReturn(Arrays.asList(categoryTB));
    }

    @Test
    void getSpecificSuccessfully() {
        //Given
        String id = "1";
        setMock(id);

        //When
        CategoryIdDTO category = service.findById(id);

        //Then
        assertEquals(category.getCodcat(), id);
    }

    @Test
    void getAllSuccessfully() {

        //Given
        setSeveralMock();

        //When
        List<CategoryIdDTO> categorys = service.findAll();

        //Then
        assertEquals(categorys.size(), 1);

    }

    @Test
    void createSuccessfully() {

        //Given
        CategoryIdDTO category = new CategoryIdDTO();
        category.setCodcat("1");
        category.setDescat("Desc");

        //When
        service.create(category);

        //Then
        Mockito.verify(repository).save(Mockito.any(CategoryTb.class));

        
    }

    @Test
    void updateSuccessfully() {

        //Given
        String id = "1";
        setMock(id);
        CategoryModifDTO category = new CategoryModifDTO();
        category.setDescat("Desc");

        //When
        service.update(id, category);

        //Then
        Mockito.verify(repository).save(Mockito.any(CategoryTb.class));

    }

    @Test
    void deleteSuccessfully() {

        //Given
        String id = "1";
        setMock(id);

        //When
        service.delete(id);

        //Then
        Mockito.verify(repository).delete(Mockito.any(CategoryTb.class));

    }

    @Test
    void testFailIfExistsByIdCheckSuccessfully() {

        //Given
        Mockito.when(repository.existsById(null)).thenReturn(true);
        
        assertThrows(HttpClientErrorException.class, () -> service.failIfExistsById(null));

    }

    @Test
    void testFailIfNotExistsByIdCheckSuccessfully() {

        //Given
        Mockito.when(repository.existsById(null)).thenReturn(false);
        
        assertThrows(ResourcesNotFoundException.class, () -> service.failIfNotExistsById(null));

    }
    
}

And controller test classes are like this:

@WebMvcTest(controllers = {
    CategoryController.class }, excludeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, classes = SecurityConfig.class), excludeAutoConfiguration = {
            SecurityAutoConfiguration.class })
@TestPropertySource(locations = "classpath:test.properties")
@DirtiesContext(classMode=ClassMode.AFTER_CLASS)
public class CategoryControllerTest {

    @Autowired
    private MockMvc mvc;

    @MockBean
    private Service_CategoryTB_impl service;

    private final String baseUri = "/api/categories";

    private final String id = "1";

    @Test
    void getSpecificSuccessfully() throws Exception {

        // Given
        setMock(id);

        // When
        mvc.perform(get(baseUri + "/{id}", id)
                .contentType(MediaType.APPLICATION_JSON))
                .andExpectAll(
                        status().isOk(),
                        content().contentType(MediaType.APPLICATION_JSON),
                        jsonPath("$.codcat").value(id));
    }

    @Test
    void getAllSuccessfully() throws Exception {

        // Given
        setSeveralMock();

        // When and then
        mvc.perform(get(baseUri)
                .contentType(MediaType.APPLICATION_JSON))
                .andExpectAll(
                        status().isOk(),
                        content().contentType(MediaType.APPLICATION_JSON),
                        jsonPath("$").isArray());
    }

    @Test
    void createSuccessfully() throws Exception {
        // Given
        JSONObject cfDTO = createAsJson();

        // When and then
        mvc.perform(post(baseUri)
                .contentType(MediaType.APPLICATION_JSON)
                .content(cfDTO.toString())).andExpectAll(
                        status().isCreated(),
                        header().string("Location", baseUri + "/" + id),
                        jsonPath("$.message").value("created."));
    }

    @Test
    void updateSuccessfully() throws Exception {

        // Given
        JSONObject cfDTO = createAsJson();

        // When and then
        mvc.perform(put(baseUri + "/{id}", cfDTO.getString("codcat"))
                .contentType(MediaType.APPLICATION_JSON)
                .content(cfDTO.toString())).andExpectAll(
                        status().isOk(),
                        header().string("Location", baseUri + "/" + id),
                        jsonPath("$.message").value("modified."));

    }

    @Test
    void deleteSuccessfully() throws Exception {

        mvc.perform(delete(baseUri + "/{id}", id))
                .andExpectAll(
                        status().isOk(),
                        jsonPath("$.message").value("deleted."));
    }

    private JSONObject createAsJson() throws JSONException {

        JSONObject categoryJson = new JSONObject();
        categoryJson.put("codcat", id);
        categoryJson.put("descat", "desc");

        return categoryJson;
    }

    private void setSeveralMock() {

        CategoryIdDTO categoryTB = new CategoryIdDTO();
        categoryTB.setCodcat(id);

        when(service.findAll()).thenReturn(Arrays.asList(categoryTB));
    }

    private void setMock(String id) {
        CategoryIdDTO category = new CategoryIdDTO();
        category.setCodcat(id);
        when(service.findById(id)).thenReturn(category);
    }

}

And test.properties

spring.profiles.active=test

spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.url=jdbc:h2:mem:db;DB_CLOSE_DELAY=-1
spring.datasource.username=sa
spring.datasource.password=sa

logging.pattern.console=


spring.autoconfigure.exclude= \ 
  org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration, \
  org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration, \
  org.springframework.boot.autoconfigure.jdbc.DataSourceTransactionManagerAutoConfiguration

The rest of the tests have practically the same structure. There's only one exception, where a base64 image is handled during a test, but its size is kept minimal precisely so as not to make those tests too heavy.

I've followed this, this, and this questions. I even found a bug report that seems to describe my very same problem. But the Spring Boot docs point to the latest version of JUnit, so I suppose that if I'm using the latest Spring Boot version, I should have the latest version of JUnit (or >5.3 at least), so the bug should already be solved.

Here is my POM

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.7</version>
        <relativePath /> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.project</groupId>
    <artifactId>project</artifactId>
    <version>1.0</version> 
    <packaging>war</packaging>
    <name>project</name>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.modelmapper/modelmapper -->
        <dependency>
            <groupId>org.modelmapper</groupId>
            <artifactId>modelmapper</artifactId>
            <version>3.0.0</version>
        </dependency>

        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
        </dependency>

        <dependency>
            <groupId>org.javassist</groupId>
            <artifactId>javassist</artifactId>
            <version>3.28.0-GA</version>
        </dependency>

        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-core-asl</artifactId>
            <version>1.9.2</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.9.13</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jdbc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-hateoas</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ojdbc8</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
        </dependency>
        <dependency>
            <groupId>io.jsonwebtoken</groupId>
            <artifactId>jjwt</artifactId>
            <version>0.9.1</version>
        </dependency>

        <dependency>
            <groupId>org.json</groupId>
            <artifactId>json</artifactId>
            <version>20220320</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-test</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.springdoc</groupId>
            <artifactId>springdoc-openapi-ui</artifactId>
            <version>1.6.4</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.2</version>
                <configuration>
                    <properties>
                        <configurationParameters>
                            junit.jupiter.conditions.deactivate = *
                            junit.jupiter.extensions.autodetection.enabled = true
                            junit.jupiter.testinstance.lifecycle.default = per_class
                            junit.jupiter.execution.parallel.enabled = true
                            junit.jupiter.execution.parallel.mode.default = concurrent
                            junit.jupiter.execution.parallel.mode.classes.default = concurrent
                            junit.jupiter.execution.parallel.config.strategy = dynamic
                            junit.jupiter.execution.parallel.config.dynamic.factor = 2
                        </configurationParameters>
                    </properties>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

I've tried every solution that I've found, even the solutions proposed in the comments of the questions mentioned. If someone can help me, thanks in advance.

Note 1: The code is an approximation of the original code, but it represents it closely enough.

Note 2: If you can help me speed up testing, it would be a bonus. The guy in the bug report claims that he executes ~2600 tests in 3 minutes, while my tests take 20+ minutes to execute. But in one of the questions, an answer suggests it's normal for a test suite to take several hours to complete.
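
For comparison, a sketch of surefire settings that target the two usual suspects here (the parameters are standard maven-surefire-plugin options; the heap value is only an example): many @SpringBootTest contexts being built concurrently, and a forked test JVM whose heap is too small:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.22.2</version>
        <configuration>
            <!-- One forked JVM, reused for all test classes, with an explicit heap. -->
            <forkCount>1</forkCount>
            <reuseForks>true</reuseForks>
            <argLine>-Xmx2g</argLine>
            <properties>
                <configurationParameters>
                    <!-- Keep Spring application contexts from being created concurrently. -->
                    junit.jupiter.execution.parallel.enabled = false
                </configurationParameters>
            </properties>
        </configuration>
    </plugin>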

How can I receive "double free" error when freeing only once? [closed]

I have this code here

#include <stdio.h>
#include <stdlib.h>   /* for free() */

#include "../Manager/input/aspire_read_input.h"

int main()
{
    _Asp_Lexer *lexer = _asp_read_file("program.txt");

    printf("File contents: \n%s\n", lexer->_file_contents);
    printf("Data len: %lu\nInstruction len: %lu\n", lexer->_data_size, lexer->_inst_size);

    for (int i = 0; i < lexer->_data_size; i++)
    {
        printf("data: %lu\n", lexer->data[i]);
    }

    for (int i = 0; i < lexer->_inst_size; i++)
    {
        printf("inst: %lu\n", lexer->instrs[i]);
    }

    free(lexer->instrs);
    free(lexer->data);
    free(lexer->_file_contents);
    free(lexer);
}

Here, at the line free(lexer->data);, it throws "double free" even though that is the only line of code freeing that memory. The function _asp_read_file() is correctly allocating the memory, as seen while debugging, but the error still persists. I have checked tens of times, but that line throws the error no matter the sequence in which I free the memory. If I comment out that line, it works! But then there's a memory leak! Something else, that is not my code, is freeing the memory. I debugged, and these are the functions called: main calls free, free calls _int_free_merge_chunk, which calls malloc_printerr, which calls __libc_message.cold, which calls abort, which calls raise, which calls __pthread_kill_implementation. I assure you that that line is the only one freeing that memory. If the function _asp_read_file had freed the memory, the loops printing the data wouldn't have worked. Note: In _asp_read_file, lexer->instrs is allocated first, followed by lexer->data.

I debugged tens of times and looked into everything I could find on the internet, but made no progress in solving the error.

EDIT: I solved the issue, but it left me with more questions than answers. What I hadn't done in the _asp_read_file function was close the opened file, but how can that affect lexer->data?! I don't get it.
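
For anyone hitting the same symptom, a minimal sketch (not the code from this question) of how a heap overflow elsewhere can surface as a bogus "double free" at an unrelated free() call: glibc only notices the trampled chunk metadata when a neighbouring block is freed, and building with -fsanitize=address usually points straight at the offending write:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *a = malloc(16);
        char *b = malloc(16);

        /* Bug: writes past the end of 'a' and tramples the heap metadata
           that glibc keeps just before 'b'. */
        memset(a, 'x', 48);

        /* Each pointer is freed exactly once, yet glibc may abort here
           with "double free or corruption" (or a similar message),
           because the damage is only detected during free(). */
        free(b);
        free(a);
        return 0;
    }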
