MailChimp RSS campaign validation fails with Cloudflare, Wordfence and WordPress

I've been using a MailChimp RSS campaign for several years, but I hadn't posted for some time until yesterday. The campaign didn't send, and I've paused it to check the configuration.

It's failing at the "RSS feed and send timing" step with the error "Connecting to url failed". The feed URL uses https.

I can access that URL from inside my network, and it passes validation at https://validator.w3.org/feed. My environment is WordPress with Wordfence and Cloudflare; WordPress, my theme and all plugins are up to date. I reviewed Cloudflare > Security > Events and see nothing matching the timestamps of my attempts to validate the feed URL via Mailchimp. I have set Cloudflare to Development Mode and also tried pausing Cloudflare entirely; at the same time I set Wordfence to learning mode and reviewed the Wordfence security log, with nothing showing up. I've also disabled all plugins that affect content (caching, content optimisation, etc.), tried disabling HTTP redirects in Cloudflare, and checked a Cloudflare trace to see whether anything is being modified by a page rule.

It feels like it's failing before even connecting to my host. I'm using the Full (strict) SSL/TLS encryption mode and have also tested with the Off and Flexible modes.
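One further check is to fetch the feed from a machine outside my network using a deliberately non-browser User-Agent, to see whether Cloudflare or Wordfence treats automated fetchers differently from browsers. Here is a minimal sketch in C# (the feed URL is a placeholder and the test User-Agent string is arbitrary, so both are assumptions to adjust):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FeedCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder feed URL - substitute the real feed address.
        const string feedUrl = "https://example.com/feed/";

        // A deliberately non-browser User-Agent, to see whether the edge or a
        // firewall rule handles automated fetchers differently from browsers.
        client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "FeedFetcher-Test/1.0");

        using var response = await client.GetAsync(feedUrl);
        Console.WriteLine($"Status: {(int)response.StatusCode} {response.ReasonPhrase}");

        // An HTML challenge page, a 403 or a 503 instead of XML would point at the
        // edge or the firewall rather than at the feed itself.
        string body = await response.Content.ReadAsStringAsync();
        Console.WriteLine(body.Length > 300 ? body.Substring(0, 300) : body);
    }
}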

What else can I do at my end?

Thank you for your advice.

Should I keep all the articles in the RSS feed or just the new ones?

Should I keep all the articles in the RSS feed or just the new ones? There are a lot of articles on my website and a new one is added every 10-20 minutes, so what would be the best approach in this case? Keep all the entries and just append new ones, or add new ones and delete the oldest ones at the same time? If the second option is better, how many entries should I keep?
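For context, capping a feed at the newest N entries is mechanically straightforward; here is a minimal sketch using System.ServiceModel.Syndication, where the cap of 50 and the file paths are arbitrary example values rather than a recommendation:

using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

class FeedTrimmer
{
    // Keep only the newest maxItems entries; 50 is an arbitrary example cap.
    static void Trim(string inputPath, string outputPath, int maxItems = 50)
    {
        using var reader = XmlReader.Create(inputPath);
        var feed = SyndicationFeed.Load(reader);

        // Sort newest first and drop everything beyond the cap.
        feed.Items = feed.Items
            .OrderByDescending(i => i.PublishDate)
            .Take(maxItems)
            .ToList();

        using var writer = XmlWriter.Create(outputPath);
        feed.SaveAsRss20(writer);
    }
}

Whatever the cap, subscribers that poll less often than the cap covers will miss items, so the right number depends on how frequently readers fetch the feed.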

How to Create a Sense of Anticipation on Your Blog

People read and then subscribe to blogs that they think will enhance their lives in some way in the future.

Many bloggers create a sense of anticipation on a blog quite instinctively – but there are numerous things that you can do quite intentionally to create anticipation and increase the chances of someone subscribing.

So how do you convince people that something that you’re yet to create is worth signing up for?

Today I want to share one effective strategy for building anticipation on a blog with some practical ideas on how to implement it. Like yesterday’s post – it’s not rocket science – but it is something that has worked for me.

Highlight Current and Past Quality Content

Probably the most convincing argument to a reader that you’ll write something that they can’t live without in the future is to have already written something that they have connected with.

Your current and past posts are your most effective advertisements for a continued relationship to those arriving on your blog.

As a result – one of the most effective strategies for creating anticipation on a blog is to put your best content in front of those visiting your blog – show them what you can do and let the quality of that work speak for itself.

Think back to the blogs you’ve subscribed to lately – if you’re anything like me, you’ve subscribed in most cases as a result of reading a post you thought was helpful, interesting or entertaining.

Most of us click the RSS feed icon or subscribe link based upon the quality of what we already read in the hope of seeing more of it.

So what’s the lesson here?

Actually there are two lessons – one is obvious and the other many fail to do.

1. The obvious one is to write great content and to do it regularly – it’s got to be your number one priority as a blogger.

2. The less obvious one is to put your best content into the view of those who are yet to subscribe to your blog – particularly first time visitors (who are crucial to target if your objective is to build the number of subscribers to your blog). Let me share a few ways you can do this.

How to Highlight Your Best Content

There are numerous ways to highlight your best content and, in doing so, give people reason to subscribe to your feed.

1. Sneeze Pages – Perhaps the most useful technique that I can show you is to create sneeze pages on your blog. I’ve recently done this on Digital Photography School. Look at the ‘Digital Photography Tips’ section in my sidebar – these links point to ‘sneeze pages’ that highlight my best and most popular content.

In having these sneeze pages I not only increase my page views – I also show new readers just how much I’ve already covered, and hopefully increase the sense of authority and credibility that I have.

The subscription rate from users hitting these sneeze pages is extremely high (note – I have prominent ways to subscribe on these sneeze pages and the pages that they link to).

2. ‘Best of’ Sections – Another approach is to create sections in your sidebar or on your front page that highlight your best work. Check out this example from a previous design of the ProBlogger website: the ‘Best of ProBlogger’ section on the front page of this blog. This section is ‘hot’ – quite literally. Check out the heat map below (taken a few months back using the CrazyEgg tool) to see how many people click on it.

[Heat map of clicks on the ‘Best of ProBlogger’ section]

The benefits of this are numerous – but ultimately it’s about driving people to quality content you’ve already written. My observation is that many of my subscribers come via these popular pages.

Since this screenshot was taken, ProBlogger has been redesigned with different themed sections that highlight key articles even more prominently. You can read more about how and why we changed the design of the ProBlogger website here.

3. Landing Pages – Another strategy is to use a plugin like Landing Sites to detect when a reader is arriving on your blog for the first time and show them other posts you’ve written on the topic they were searching for.

This works well – particularly if you have a large archive – because someone arriving on your blog sees not just one post on the topic they’re looking for but numerous posts (increasing the perception that you’re a comprehensive source of information on that topic).

4. Interlink Posts – You should be regularly linking to your best previous posts in new posts. In doing this you constantly drive people to pages where they see writing of a quality that is likely to convince them you know what you’re talking about. The more pages they view and find useful, the greater the chance of them subscribing.

But Wait There’s More

The key to the above four techniques is to send new readers to your highest quality and most helpful posts and then to present them with an opportunity to subscribe on those posts (update: here’s my post with more tips on how to build anticipation on your blog).

However, highlighting content isn’t enough on its own.

It will definitely work to some degree, but there are numerous other ways to create anticipation on a blog, and I’ll be turning my attention to those tomorrow.

HTTP Request to Nasdaq RSS Feed Works in Browser But Hangs in Code (C#, Node.js/Axios), Even with Identical Headers

I'm encountering a peculiar issue when trying to make an HTTP GET request to the URL 'https://www.nasdaq.com/feed/rssoutbound?category=FinTech'. When I manually enter this URL in my web browser, the feed loads without any issues. However, when I attempt to make the same request programmatically using code (I've tried C# with HttpClient and Node.js with Axios), the request hangs indefinitely and eventually times out.

Here's my C# code:

// Requires: using System.Net.Http; using System.Net.Http.Headers; using System.Threading.Tasks;
public async Task Execute()
{
    using HttpClient httpClient = new HttpClient();
    try
    {
        // Specify the URL you want to request
        string url = "https://www.nasdaq.com/feed/rssoutbound?category=FinTech";

        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("*/*"));
        httpClient.DefaultRequestHeaders.AcceptLanguage.Add(new StringWithQualityHeaderValue("en-US"));
        httpClient.DefaultRequestHeaders.AcceptLanguage.Add(new StringWithQualityHeaderValue("en", 0.9));
        httpClient.DefaultRequestHeaders.Add("sec-ch-ua", "\"Chromium\";v=\"116\", \"Not)A;Brand\";v=\"24\", \"Microsoft Edge\";v=\"116\"");
        httpClient.DefaultRequestHeaders.Add("sec-ch-ua-mobile", "?0");
        httpClient.DefaultRequestHeaders.Add("sec-ch-ua-platform", "\"Windows\"");
        httpClient.DefaultRequestHeaders.Add("sec-fetch-dest", "empty");
        httpClient.DefaultRequestHeaders.Add("sec-fetch-mode", "cors");
        httpClient.DefaultRequestHeaders.Add("sec-fetch-site", "same-origin");

        // Add the User-Agent header
        httpClient.DefaultRequestHeaders.Add("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36");


        // Send a GET request and wait for the response
        HttpResponseMessage response = await httpClient.GetAsync(url);

        // Check if the request was successful
        if (response.IsSuccessStatusCode)
        {
            // Read the content of the response as a string
            string content = await response.Content.ReadAsStringAsync();

            // Print the content to the console
            Console.WriteLine(content);
        }
        else
        {
            Console.WriteLine($"HTTP request failed with status code: {response.StatusCode}");
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}

I've also tried making the same request with Node.js and Axios, and the result is the same:

// Node.js code using Axios
const axios = require('axios');

async function fetchData() {
    try {
        const response = await axios.get('https://www.nasdaq.com/feed/rssoutbound?category=FinTech', {
            headers: {
                Accept: '*/*',
                'Accept-Language': 'en-US,en;q=0.9,tr;q=0.8',
                'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
                'sec-ch-ua-mobile': '?0',
                'sec-ch-ua-platform': '"Windows"',
                'sec-fetch-dest': 'empty',
                'sec-fetch-mode': 'cors',
                'sec-fetch-site': 'same-origin',
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36'
            }
        });
        // ... (handling the response)
    } catch (error) {
        console.error('An error occurred:', error.message);
    }
}

fetchData();

Strangely, when I attempt to retrieve feeds from other RSS sources, everything works fine. The issue seems to be specific to the Nasdaq feed. I've even tried using Puppeteer to fetch the content, and it also hangs indefinitely.
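For reference, here is a hedged diagnostic variant of the C# call. It bounds the wait with a cancellation token so the hang surfaces as an exception, and it requests HTTP/2 (requires .NET 5 or later for VersionPolicy), since browsers typically negotiate HTTP/2 while HttpClient defaults to HTTP/1.1; whether the protocol version matters for this feed is only a guess, and the 15-second timeout is arbitrary:

using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class FeedProbe
{
    static async Task Main()
    {
        using var httpClient = new HttpClient();

        // Bound the wait so a hang surfaces as an exception instead of blocking forever.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(15));

        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://www.nasdaq.com/feed/rssoutbound?category=FinTech")
        {
            // Browsers usually speak HTTP/2; HttpClient defaults to HTTP/1.1,
            // which is one difference that identical headers cannot paper over.
            Version = HttpVersion.Version20,
            VersionPolicy = HttpVersionPolicy.RequestVersionOrLower
        };
        request.Headers.TryAddWithoutValidation("User-Agent",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36");

        try
        {
            using var response = await httpClient.SendAsync(request, cts.Token);
            Console.WriteLine($"{(int)response.StatusCode} via HTTP/{response.Version}");
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Timed out before any response arrived.");
        }
    }
}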

What could be causing this problem with the Nasdaq feed specifically? Is there something unique about their server configuration or the way they handle requests that might be causing this behavior? Any insights or suggestions would be greatly appreciated.

Cannot extract Image Url from Element Extensions

I'm trying to read from what Goodreads offers as an RSS feed and I'm struggling. It seems to work alright for the basic stuff like title and such, but I can't seem to pull the image URL out of the element book_image_url.

The feed is here.

Here's my code:

    var reader = XmlReader.Create(_goodreadsSettings.GoodreadsProfileUrl);
    var feed = SyndicationFeed.Load(reader);
    var feedItems = limit > 0 ? feed.Items.Take(limit) : feed.Items;
    var items = feedItems.Select(i =>
    {
        //NOTE: I'm doing this in this way because there is no namespace on these elements
        var syndicationElementExtensions = i.ElementExtensions
            .Where(e => e.OuterName == "book_image_url" && e.OuterNamespace == "").ToList(); //cannot preview the value of this variable in Visual Studio at all, so I can't tell what's happening
        var imageUrl = syndicationElementExtensions
            .Select(e => e.GetObject<XmlElement>().SelectSingleNode("book_image_url")?.InnerText)
            .FirstOrDefault();

        var link = i.Links.FirstOrDefault()?.Uri.AbsoluteUri;

        var imageFromDescription = GetImageFromDescription(i.Summary.Text);

        return new GoodreadsItem
        {
            Title = i.Title.Text,
            Summary = i.Summary.Text,
            Url = link,
            ImageUrl = imageUrl,

        };
    });
    return items;

Now, even though Visual Studio won't let me view the value of syndicationElementExtensions when I hover or use a Watch, I was able to view it by running the Where code in the Immediate Window, and it gave me this:

OuterName: "book_image_url"
OuterNamespace: ""
_buffer: {System.ServiceModel.XmlBuffer}
_bufferElementIndex: 1
_extensionData: null
_extensionDataWriter: null
_outerName: "book_image_url"
_outerNamespace: ""

This doesn't tell me much more, but clearly there's a value in there at some point. But then I can't seem to extract it using the GetObject<XmlElement> code on the next line. Is there something weird about the feed, or do I need to do it another way?
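For reference, one alternative worth comparing is to read the extension's text through GetReader() instead of GetObject<XmlElement>(); this is a minimal sketch and untested against the Goodreads feed, so treat it as a guess rather than a confirmed fix:

    // Read the text content of the <book_image_url> extension directly from
    // the underlying XML, without going via XmlElement and SelectSingleNode.
    var imageUrl = i.ElementExtensions
        .Where(e => e.OuterName == "book_image_url" && e.OuterNamespace == "")
        .Select(e => e.GetReader().ReadElementContentAsString())
        .FirstOrDefault();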
