Site Admin
  1. There's no way to clear the server-side cache because there no longer is one: server-side caching only exists if something like Memcached is enabled, and it isn't installed on our server for system performance reasons. What appears to be happening is that the old forums still exist within the forum software, but no user groups have access to them. Tapatalk is ignoring that and showing that they're still there. The solution is simply to delete them. I'll ask Rob what he wants to do there.
  2. I lost my brother in similar circumstances last year. He had serious mental health issues including paranoid schizophrenia likely triggered by decades of smoking dope, but refused to acknowledge it and rebuffed all attempts to help. God knows I tried and I did manage to get him into the mental healthcare system, paid off his debts and got him back into housing, but his last words to me despite all that were "You're a f'n c". He had a history of violence, so I wasn't prepared to put my children at risk to help a man who didn't want it, so I had to let him find his own way out of the dark. He drank very heavily to quiet the many voices in his head. That led to his early demise at 49 due to internal bleeding from a ruptured stomach after downing a bottle of vodka, one of many found in his apartment after his death. We initially believed he died suddenly, likely the result of a ticking time bomb in his brain after he decided he could do a handstand off a third-storey balcony 25 years ago. I caught him the first few times, but a determined fool will eventually prevail and I had to scrape his broken and bloody body off the pavement below into an ambulance. He was incredibly lucky to survive that incident as the entire top of his skull was a fractured mess, so it was reasonable to believe that a blood clot could have been lurking unseen that reared its ugly head and knocked him off instantly and quietly. Alas, that wasn't to be - a coroner's report a month ago showed he died in agony, bleeding out from the inside, and wasn't found for several days. So yes, while I agree with the sentiment that we should be there for friends and family, you can't help someone who refuses it and you can't blame yourself when they ultimately get what they want. We console ourselves that at long last he is at peace, but it's the survivors who are left damaged in the wake.
He was my big brother, my childhood hero and I'll love him until the day I die, but you can't go through supporting people with mental health issues (my mother's also a sufferer...a story for another day) without losing a piece of yourself along the way. You have to be so strong for them, frequently for years at a time, that it eventually takes its toll on your own mental health. In my view, the focus of mental health should not only be on the sufferers, but on the suffering they cause to their nearest and dearest, because they're the ones at greatest risk of falling down the same hole.
  3. I posted this in another thread; it's equally useful here. You can subscribe to a forum via RSS but receive the notifications by email. Head to IFTTT, sign up, then set up an RSS-to-email applet -
  4. You don't even need an app. Head to IFTTT, sign up, then set up an RSS-to-email applet - I use it for data centre outage notifications.
  5. I don't think it's possible on private forums. Broadcasting the contents defeats the purpose of being private; at least, that's the logic applied by the software devs. Oddly enough, that's exactly what they're doing in the RSS feed. Go to the CUNTAS forum and subscribe to that instead. There are RSS-to-email converters out there; I think IFTTT can do it.
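The RSS-to-email approach in the last few posts can be sketched in a few lines of standard-library Python. This is a minimal illustration, similar in spirit to an IFTTT applet: parse a feed, pick out items you haven't seen before, and notify on each new one. The feed sample and links are hypothetical, and a real bridge would send via smtplib rather than print.

```python
# Minimal sketch of an RSS-to-email bridge (illustrative, not production code).
import xml.etree.ElementTree as ET

def new_entries(feed_xml, seen_links):
    """Return (title, link) pairs for feed items whose link isn't in seen_links."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        if link not in seen_links:
            fresh.append((title, link))
            seen_links.add(link)   # remember it so we only notify once
    return fresh

# Hypothetical feed standing in for the real forum RSS URL.
SAMPLE = """<rss version="2.0"><channel><title>Water Hole</title>
<item><title>24:24 Specials</title><link>https://example.com/t/1</link></item>
<item><title>Old thread</title><link>https://example.com/t/0</link></item>
</channel></rss>"""

seen = {"https://example.com/t/0"}   # links we've already emailed about
for title, link in new_entries(SAMPLE, seen):
    # A real bridge would build an email here and hand it to smtplib.SMTP;
    # printing stands in for the notification step.
    print(f"New thread: {title} -> {link}")
```

Run on a schedule (cron, or a loop with a sleep), this gives the same poll-and-notify behaviour the applet provides.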
  6. I can run a cache update in the back-end to see if that clears Tapatalk. Leave it with me, I'll give it a bash in the morning.
  7. I bribed them, Ken, to increase deliverability.
  8. You can hit the minus sign to the top-right of the Chatbox to minimise it.
  9. It looks like someone's removed it. Easy enough to add back in, but I'll see if it was intentional or not.
  10. Bingo. Either follow the Water Hole forum at the top, or use the RSS button at the bottom. Follow will send an email notification, RSS will update an RSS reader app.
  11. We send over 60,000 email notices out from the FOH forums per month via Sparkpost. That works out to about 2,000 emails a day; of those, about 9% bounce and 8% are rejected by the receiving mail hosts. Gmail is by far the largest recipient at 45%. In short, we're gold as far as the server sending emails and peeps getting them, but some don't get through due to being blocked or bounced at the recipient end. Please add us to your email whitelist to improve delivery rates. Given the 24:24 threads are new every day (the old ones are deleted), it's impossible to subscribe to those specific threads ahead of time, so follow the Water Hole forum to receive email notices of new threads within it. That should do the job, and I can see 217 members already do it. If you're still not getting them, poke around at the Gmail end, because we're definitely sending them. An alternative that avoids the risk of missing an email is to subscribe to the RSS feed for the Water Hole, which updates whenever new threads are posted. That of course means using RSS software, but RSS apps can typically be configured to notify you via email or desktop notification when a new post arrives, depending on how frequently you set them to ping the RSS provider. If your RSS app doesn't support email notifications, use something like IFTTT and Pushover -
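As a quick sanity check of those delivery figures (assuming a 30-day month; the rates are the rounded percentages quoted above):

```python
# Back-of-envelope check: 60,000 notices/month, ~9% bouncing, ~8% rejected.
monthly_sent = 60_000
daily_sent = monthly_sent / 30              # ~2,000/day, matching the post
bounce_rate, reject_rate = 0.09, 0.08

delivered_share = 1 - bounce_rate - reject_rate
daily_delivered = daily_sent * delivered_share
print(f"{daily_sent:.0f}/day sent, ~{daily_delivered:.0f} delivered ({delivered_share:.0%})")
```

So roughly five in six notices land; the rest die at the recipient end, which is why the whitelist tip matters.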
  12. Just checking in to see how the site has been performing. AFAIK, everything's been pretty good for the last couple of weeks since we ditched the Sitelock CDN, which was causing us more problems than it solved. The root of the server performance issue now looks to be not traffic capacity when the 24:24 thread is posted, but a faulty MySQL configuration failing during the 24:24 upload. It also appears that MySQL wasn't set up to handle the number of threads we're tossing at it during the spike. Both issues combined to cause a nasty fubar 2-3 times a week. We tested a lot of stuff during this ordeal to pin down the source of the problem. My gut told me something was causing a memory leak (we ruled out pretty much everything else) and I'm confident I was close to the mark. We've got some server optimisations underway that should resolve this bottleneck and hopefully sort it out once and for all. Those changes may cause other unforeseen issues, so please let me know if you see anything odd going on in the coming days. There will likely be a short outage to reboot the server after MySQL is updated, but other than that it should be business as usual.
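For context, MySQL tuning for this kind of connection spike usually revolves around a handful of server variables. The variable names below are real MySQL/InnoDB settings, but every value is an illustrative placeholder, not our actual configuration:

```ini
[mysqld]
# All values are illustrative assumptions, not the live FOH config.
max_connections         = 500   # headroom for the 24:24 click frenzy
thread_cache_size       = 64    # reuse connection threads rather than spawning new ones
innodb_buffer_pool_size = 8G    # keep hot tables in RAM (roughly half of 16 GB)
wait_timeout            = 60    # reap idle connections sooner under load
```

Getting max_connections and the buffer pool wrong is a classic way for a site to fall over under a refresh storm even when CPU looks fine.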
  13. Thanks gents. The Sitelock page is our content distribution network unable to serve the 24:24 thread due to the volume of members madly clicking refresh to get in before everyone else. The Sitelock page won't display for everyone, and if you see it, it should clear within a few minutes of the 24:24 thread going live. Corylax and Akela3rd, hitting refresh should clear the Sitelock page, which may be stuck in your web browser cache. To force a page refresh, press Ctrl+F5 or hold Ctrl while left-clicking the refresh button. Until we get more resources to handle the madness, this will continue if everyone keeps clicking the crap out of the refresh button when the 24:24 thread goes live. It spikes the concurrent CPU threads by 500% and overwhelms 24 CPUs and 16 GB of RAM. We doubled server resources on Friday and it looks to have made no difference. We're reviewing a proposal from our webhost today regarding migrating to Amazon Web Services to handle the 24:24 traffic spike and will get this fixed asap! If there was a better way to post the 24:24 specials, we'd do it.
  14. That's our window of chaos. We're pretty confident we know what we need to do now and a solution is in the works. As a Kiwi supermodel once said, "It won't hippen overnight, but it will hippen." Please bear with us for the next week or two (at a guess) while we sort things out behind the scenes.
  15. Thanks for the update MrGlass, Rob reported the same thing this morning. I think we can rule out the upload: the 24:24 thread was pre-loaded an hour early, then the site crashed when it was set live. The click frenzy is still overwhelming 24 CPUs and 16 GB of RAM, so we have our work cut out trying to get enough resources to handle the spike. We're investigating Amazon Web Services to improve our capacity to ride it out. Thanks for your patience everyone, we'll get there in the end.

Community Software by Invision Power Services, Inc.