
Twitter Throttling Hits Third-Party Apps

timothy posted about 4 years ago | from the you-should-take-some-thritalin-perhaps dept.


Barence writes "Twitter's battle to keep the microblogging service from falling over is having a dire affect on third-party Twitter apps. Users of Twitter-related apps such as TweetDeck, Echofon and even Twitter's own mobile software have complained of a lack of updates, after the company imposed strict limits on the number of times third-party apps can access the service. Over the past week, Twitter has reduced the number of API calls from 350 to 175 an hour. At one point last week, that number was temporarily reduced to only 75. A warning on TweetDeck's support page states that users 'should allow TweetDeck to ensure you do not run out of calls, although with such a small API limit, your refresh rates will be very slow.'"


Monty Python (0)

Anonymous Coward | about 4 years ago | (#32830558)

Didn't Monty Python have a twit race? Is that related?

Re:Monty Python (-1, Flamebait)

Anonymous Coward | about 4 years ago | (#32830612)

Eat my balls you faggot.

Re:Monty Python (4, Insightful)

spazdor (902907) | about 4 years ago | (#32830950)

Company bases a business model on offering their resources for free, only to discover to their chagrin that people will take them up on it. Where oh where have I heard this one before?

175/hr is slow? (4, Insightful)

rotide (1015173) | about 4 years ago | (#32830566)

Isn't that an update nearly every 20 seconds? How fast do people need to see that you're currently wiping your butt?

Re:175/hr is slow? (5, Insightful)

the_one_wesp (1785252) | about 4 years ago | (#32830636)

If you're only following a single feed. But I have like 10 lists in TweetDeck that all get individually queried, and there are some who have WAY more than that.

But I am inclined to comment about this bit of "news"... Big. Woop. Twitter's just trying to stay alive. If the service falls over NO UPDATES will happen... at all... Inconvenient, yes, but totally necessary.

Re:175/hr is slow? (0)

Anonymous Coward | about 4 years ago | (#32830670)

If the service falls over, then people will simply migrate to something better, like Google Buzz.

Re:175/hr is slow? (1)

jgagnon (1663075) | about 4 years ago | (#32830710)

Incoming conspiracy theory in 3, 2, 1...

Re:175/hr is slow? (0)

Anonymous Coward | about 4 years ago | (#32831078)

Dude... no one said ANYTHING about Obama's NASA/Russia plans and how they affect Twitter, so it's not worth even bringing it up.

Re:175/hr is slow? (1)

the_one_wesp (1785252) | about 4 years ago | (#32830730)

That wasn't to say Twitter's tighter throttling was a permanent solution, or even a good one. I would hope they're looking at server and network upgrades to better fix the issue. But, yes you're correct. All the more reason for Twitter to not die.

Re:175/hr is slow? (1)

RollingThunder (88952) | about 4 years ago | (#32830768)

Thanks for that; the article wasn't clear whether it was X/hr for an app run by a given user in total, or X/hr for that app for a given person they're following.

It seems inefficient that TweetDeck is sending 10 different requests; can Twitter's API not handle a "tell me if anyone I'm following has updated" request, to allow 10 requests to be rolled into one? Admittedly, that would put additional burden on the twitter servers to keep track of what "anyone I'm following" means.

Re:175/hr is slow? (0)

Anonymous Coward | about 4 years ago | (#32831170)

Admittedly, that would put additional burden on the twitter servers to keep track of what "anyone I'm following" means.

But they already need to do that. It's just a matter of bundling all the extra bits of information together into one "please send me new stuff" API call. Something like CouchDB's views would probably work nicely.

Re:175/hr is slow? (3, Informative)

Alex Zepeda (10955) | about 4 years ago | (#32831240)

The API rate limit is per hour per user (if authenticated) and per IP if not authenticated. Unfortunately the Twitter API does not allow you to aggregate requests even if their web site does (e.g. status updates for all of the people I'm following and all of the things people I'm following have retweeted). If you go through the API docs, you'll find all sorts of horrid-seeming inefficiencies and awkwardness with the API.

For instance when you request a status (or a list of statuses or whatever) you'll get back: the contents of the tweet, the user name, user id, URL for the user avatar, URL for the user's profile page background image, whether that user is following you, their real name, the number of tweets that user has made, and so-on and so forth. A lot of this information could easily be cached by the client, but is instead sent for every tweet you get back.
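
A minimal sketch of the client-side caching described above; the JSON field names are meant to approximate the old v1 status payload and are illustrative only:

    # Cache the embedded user object once per user instead of re-handling the
    # same profile data for every tweet. Field names approximate the v1 payload.
    user_cache = {}

    def ingest_status(status):
        user = status.get("user", {})
        uid = user.get("id")
        if uid is not None and uid not in user_cache:
            # Store the profile data once instead of keeping a copy per tweet.
            user_cache[uid] = {
                "screen_name": user.get("screen_name"),
                "name": user.get("name"),
                "avatar_url": user.get("profile_image_url"),
            }
        return {"user_id": uid, "text": status.get("text")}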

Re:175/hr is slow? (1)

Bakkster (1529253) | about 4 years ago | (#32830772)

If you're only following a single feed. But I have like 10 lists in TweetDeck that all get individually queried, and there are some who have WAY more than that.

So why does it take multiple API queries? Shouldn't it grab everything all at once, perhaps with the option to prioritize the current window worth of results? I can understand requiring a separate call to view profiles, or perform a search, but whether I'm following 1 person or 100 people, I expect it should take just 1 call to receive all the tweets.

Re:175/hr is slow? (1)

jeffmeden (135043) | about 4 years ago | (#32831142)

I think he means that he has TweetDeck set up to monitor several lists (a feature in twitter to allow a feed of updates from users not on your main "follow" list) which means that the requests for what's in each list will have to happen independently. Correct me if I'm wrong...

What Twitter really needs to do is require heavy-hitting API-using apps like TweetDeck to maintain their own mirror of tweet activity, a replicated database of some sort. That way, when users are craving full-length updates of ten lists of one hundred people each, they cripple the app instead of crippling the whole Twitter system. Twitter can, in theory, easily handle incoming messages and simple list updates; it's when all these elaborate tools get involved, pinging for updates at a crazy rate, that things get hectic.

Re:175/hr is slow? (1)

imunfair (877689) | about 4 years ago | (#32831308)

Yeah, their API is very simplistic; if they made some changes this wouldn't be an issue.

As it stands right now, if you wanted to check all your messages once per minute, you would need:
1 request for direct messages
1 request for messages @ you
1 request for your timeline (people you follow)
1 request for each list's timeline (you can list people you don't follow)

So if the user has 5 lists, that's potentially 8 requests every 60 seconds.

The other problem is that you can't get more than 200 messages per request - so if one of those timelines produces a lot of messages that you want to run through a filter (for instance) - then you may need multiple requests to fetch all messages since you last updated.

What they should probably do is make an all inclusive query that allows you to specify that you want DM,Timeline,LIST1,LIST45 updates, and it would provide you with all of those, with an XML/JSON field that indicated what the message source was. They would have to increase the messages per request limit to something reasonable like 1000-5000, and allow requesting say 10 different sources at once.
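
A rough sketch of that call budget and of the aggregated query being proposed; the endpoint paths only approximate the v1 REST API, and the aggregate endpoint is the hypothetical one suggested here, not anything Twitter actually offers:

    # How the per-minute polling described above spends API calls
    # (paths approximate the v1 REST API).
    SOURCES = [
        "/1/direct_messages.json",
        "/1/statuses/mentions.json",
        "/1/statuses/home_timeline.json",
    ] + ["/1/lists/statuses.json?list_id=%d" % i for i in range(5)]  # 5 lists

    calls_per_minute = len(SOURCES)            # 8 calls per minute
    calls_per_hour = calls_per_minute * 60     # 480 per hour, far over a 175/hr cap

    # The hypothetical aggregated call proposed above: one request, many sources,
    # with each returned item tagged with where it came from.
    def build_aggregate_request(sources, since_id):
        return {
            "endpoint": "/1/aggregate.json",   # does not exist; illustration only
            "params": {"sources": ",".join(sources), "since_id": since_id},
        }

    print(calls_per_minute, calls_per_hour)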

Re:175/hr is slow? (1)

Bakkster (1529253) | about 4 years ago | (#32831970)

What they should probably do is make an all inclusive query that allows you to specify that you want DM,Timeline,LIST1,LIST45 updates, and it would provide you with all of those, with an XML/JSON field that indicated what the message source was. They would have to increase the messages per request limit to something reasonable like 1000-5000, and allow requesting say 10 different sources at once.

I think 200 messages is a reasonable return rate, if only one universal query is required, instead of multiple. That's still an average of one API call per 20 seconds (say, query all every minute, that still allows two other calls per minute for messages out or overflow updates) which should be enough for most reasonable uses.

If you're getting more than 200 messages per minute, there might need to be a reevaluation of your Twitter usage. That's a lot for an individual to read. Anyone who wants to do something aggregating Tweets would of course be blocked, but I can't blame Twitter for wanting to stop 3rd party aggregators.

Re:175/hr is slow? (1)

imunfair (877689) | about 4 years ago | (#32832238)

200 isn't a lot if you're doing filtering on the client to weed out messages you didn't want to see. Not everything a person tweets may be interesting; you might want a subset of their tweets, or maybe all tweets without URLs, etc.

This is especially true related to language filtering, which is currently broken server-side on Twitter (and has been for OVER SIX MONTHS). I guess I shouldn't be surprised, considering that even their website has a minimum of one bug every time I try to use it.

Re:175/hr is slow? (1)

dunng808 (448849) | about 4 years ago | (#32832618)

I (@garydunn808) use TweetCaster on my Android phone with the refresh interval at 30 min. Two interruptions per hour is all my right brain can handle.

Re:175/hr is slow? (1)

Zen (8377) | about 4 years ago | (#32831306)

This is really how it works? Come on, what decade is this? I've been on the user side and now I'm on the vendor side of packet-based application performance products. Think Wireshark, or that de facto standard brand name that jumps into your head. A primary part of the job is showing people how inefficient their database calls are when they either ask for everything every time and don't cache it, or they get tiny bits and pieces a few bytes at a time instead of larger, more efficient downloads.

So Twitter can't bundle multiple requests into the same stream? It's not exactly rocket science - even SNMP can do this now. It saves processing power, bandwidth, time to load, etc. Pretty crazy.

Re:175/hr is slow? (1)

blair1q (305137) | about 4 years ago | (#32832192)

If twitter doesn't want to fall over, it should stabilize itself by scaling up its hardware, not by jabbing its cane into its users' asses.

Because that's a good way to stay upright, and be alone.

Re:175/hr is slow? (1)

gknoy (899301) | about 4 years ago | (#32833082)

If you're only following a single feed. But I have like 10 lists in TweetDeck that all get individually queried, and there are some who have WAY more than that.

It seems somewhat silly if you need to check feeds separately. Why not say, "Have A,B,C, or D said anything?", and get a batch of replies? After all, in theory Twitter already knows who you are following, so you probably don't even need to ask for most things.

Also, why do we care about per-second updating from Twitter? Perhaps some people do -- make it a premium service. For others, why not do a batch check every minute, or 30 seconds? If you're not in the middle of a conversation, there's a low chance that you need to be notified immediately, right?

Surely I'm missing something; please enlighten me. :)

Re:175/hr is slow? (1)

jo42 (227475) | about 4 years ago | (#32833768)

Twitter's just trying to stay alive. If the service falls over NO UPDATES will happen

What's the expression I'm looking for? Oh, yeah: "And nothing of value would be lost."

Re:175/hr is slow? (1, Insightful)

Anonymous Coward | about 4 years ago | (#32830668)

An RSS reader will consume one request per person you follow on Twitter, per refresh. Following a moderate number of people with a 15-minute refresh will easily break the cap: 50 feeds at four refreshes an hour is already 200 calls, over the 175 limit.

Re:175/hr is slow? (5, Funny)

copponex (13876) | about 4 years ago | (#32830708)

Isn't that an update nearly every 20 seconds? How fast do people need to see that you're currently wiping your butt?

It seems you have forgotten how full of shit the average Twit is.

Re:175/hr is slow? (1)

helix2301 (1105613) | about 4 years ago | (#32831226)

I never realized how popular Twitter had gotten. I use Twitter for blogging, but that's about it. None of my friends are on Twitter, so I don't spend that much time on it. I like Twitter; it's a really cool service.

popular != valuable (1)

AliasMarlowe (1042386) | about 4 years ago | (#32831392)

Nothing of value was lost, since nothing of value was ever present.
Move along to the next topic, please.

And this matters why? (0)

Anonymous Coward | about 4 years ago | (#32830580)

Oh no, I'm limited to once a minute updates! God forbid!

75 updates per hour (4, Interesting)

VisiX (765225) | about 4 years ago | (#32830590)

Any information that needs to be distributed more than once per minute probably shouldn't be relying on twitter.

Re:75 updates per hour (0)

Anonymous Coward | about 4 years ago | (#32830662)

You can't use all your 75 calls for automatic updates. This means the first time you try custom search or see details of a profile, or *anything*, you run out of calls.

Realistically I'd say with a budget of 75 calls you'd use 20 for updates, so once per 3 mins. Still not a disaster, but getting further away from "realtime" as well, which is a key use of Twitter in the first place.

Re:75 updates per hour (1, Insightful)

Anonymous Coward | about 4 years ago | (#32830986)

You need to think in terms of API calls. If it takes an API call to get one of your follower's updates, following 100 people could push your refresh rate over an hour.

Re:75 updates per hour (1)

data2 (1382587) | about 4 years ago | (#32832672)

Does it really work this way rather than one call to get all new messages of people you are following?

Are They Employing an Event/Listener Paradigm? (5, Informative)

eldavojohn (898314) | about 4 years ago | (#32830598)

Disclaimer: I'm not familiar with the Twitter API. If the assumptions I make are wrong, I apologize.

Over the past week, Twitter has reduced the number of API calls from 350 to 175 an hour.

Okay, if you're making that many calls to Twitter then there might be an inherent flaw with their RESTful interfaces. I think for a long time, the "web" as we know it has suffered from the lack of the Event/Listener paradigm. This is a pretty simple design concept that I'm going to refer to as the Observer [wikipedia.org] . Let's say I want to know what Stephen Hawking is tweeting about and I want to know 24/7. Now if you have to make more than one call, something is wrong. That one call should be a notification to Twitter who I am, where you can contact me and what I want to keep tabs on--be it a keyword or user. So all I should ever have to do is tell Twitter I want to know everything from Stephen Hawking and everything with #stephenhawking or whatever and from that point on, it will try to submit that message to me via any number of technologies. Simple pub/sub [wikipedia.org] message queues could be implemented here to alleviate my need to continually go to Twitter and say: "Has Stephen Hawking said anything new yet? *millisecond pause* Has Stephen Hawking said anything new yet? *millisecond pause* ..." ad infinitum. I'm not claiming Twitter does this but a cursory glance at the API [twitter.com] looks like it's missing this sort of Observer paradigm that allows for the scalability they need.

I'm not leveling the finger at Twitter, it's a widespread problem that even I have been a part of. Ruby makes coding RESTful interfaces so easy that it's very very tempting to just throw up a few controllers that are basically CRUD interfaces for databases and to call it a day. I suspect that Twitter is feeling the impending pain of popularity right about now ...
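
A minimal, in-process sketch of the Observer/pub-sub shape being suggested; nothing here touches the actual Twitter API, it just shows the register-once, get-pushed-to pattern instead of a polling loop:

    # Toy publish/subscribe broker: subscribers register interest once and get
    # pushed updates, instead of asking "has anything changed?" over and over.
    from collections import defaultdict

    class Broker:
        def __init__(self):
            self.subscribers = defaultdict(list)   # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, message):
            for callback in self.subscribers[topic]:
                callback(message)

    broker = Broker()
    broker.subscribe("#stephenhawking", lambda msg: print("update:", msg))
    broker.publish("#stephenhawking", "a brief history of 140 characters")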

Re:Are They Employing an Event/Listener Paradigm? (1, Funny)

Anonymous Coward | about 4 years ago | (#32830692)

Informed opinions based on logical thought and understanding of programming concepts are not welcome here.

You must be new.

Re:Are They Employing an Event/Listener Paradigm? (1, Informative)

Anonymous Coward | about 4 years ago | (#32830742)

They're working on it. They have a streaming API in beta right now.

Re:Are They Employing an Event/Listener Paradigm? (1)

The MAZZTer (911996) | about 4 years ago | (#32830798)

The biggest problem is that when Twitter (or whoever) goes to deliver the update, at the user's home network a router or firewall will block Twitter from connecting. Of course this can be overcome if the client sends a heartbeat packet via UDP at regular intervals to Twitter so that the router thinks you're actively communicating, so when Twitter pushes data back via UDP the router knows who it's for and lets it in.

Of course, UDP isn't exactly a standard web tool. I know ASP.NET supports it through .NET, PHP supports it through its socket_* functions, but some web-based clients such as Chrome extensions can't do UDP. I dunno if Adobe AIR can or not.
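
A minimal sketch of the UDP heartbeat idea, with a made-up host and port; in a real deployment the server would push data back over the same NAT mapping the heartbeats keep open:

    # Send a small UDP datagram at a regular interval so a NAT/firewall keeps
    # the outbound mapping alive. Host, port, and payload are placeholders.
    import socket, time

    HEARTBEAT_ADDR = ("push.example.com", 9999)   # hypothetical endpoint
    INTERVAL = 30                                 # seconds between heartbeats

    def heartbeat_loop(client_id):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(("HB " + client_id).encode("ascii"), HEARTBEAT_ADDR)
            time.sleep(INTERVAL)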

Re:Are They Employing an Event/Listener Paradigm? (0)

Anonymous Coward | about 4 years ago | (#32832538)

The biggest problem is that when Twitter (or whoever) goes to deliver the update, at the user's home network a router or firewall will block Twitter from connecting.

You're the type of person who invented FTP, aren't you?

Why would Twitter need to "connect" to the client? Is TCP not stateful?

You open a single TCP stream to the server, tell it what you want, and receive the updates on that stream. No need for the server to "connect" to the client.

Re:Are They Employing an Event/Listener Paradigm? (2, Interesting)

Late Adopter (1492849) | about 4 years ago | (#32830866)

I agree, that's the "right" way to tackle subscription mechanisms. But it's not the right way to tackle Twitter, because one of the defining features of Twitter is its ubiquity: i.e. if you have a phone/computer/netbook that's capable of running any sort of app whatsoever, you can run a Twitter app. As it stands now to write a Twitter client, you need to be able to do HTTP GET requests (every modern environment provides for this) and parse XML. That's it. But to do pub/sub, you'd presumably need to be able to listen, which you can't always do, say, on a smartphone or a Firefox extension.

Re:Are They Employing an Event/Listener Paradigm? (2, Informative)

Late Adopter (1492849) | about 4 years ago | (#32830908)

Bad form to reply twice, but I forgot something rather crucial: the "right" way to do this sort of thing might be to offer notifications over XMPP (i.e. Jabber/GTalk). Twitter used to do this, but they couldn't figure out how to keep it running under heavy load (which I would consider a fault on their end rather than as a fault in XMPP as a solution).

XMPP would at least take advantage of established listening pathways (GTalk clients on mobile devices, etc).

Re:Are They Employing an Event/Listener Paradigm? (0)

Anonymous Coward | about 4 years ago | (#32832098)

The problem with Twitter using XMPP to push info is that XMPP is really the right solution for most Twitter uses. The last thing Twitter wants is to "legitimize" XMPP, acknowledge that it exists, etc.

Uhh, what are you talking about? (1, Interesting)

Anonymous Coward | about 4 years ago | (#32830960)

What are you talking about? That pattern works well under very controlled circumstances, like UIs, but falls apart over networks.

What happens when a client is behind a heavily NATed network, or behind a firewall, or forced to use a proxy? Twitter can't contact them directly to push the new data. That's one of the benefits of the web; the client pulls the data, rather than it being pushed to them, which often isn't an option.

What about clients who don't have a constant connection to the Internet, or who have a dynamic IP? Now twitter has to poll them, to see if they exist. You end up with the same situation, except worse.

What happens when devices disappear, but no longer alert twitter to them no longer being a subscriber? If my smartphone gets run over by a truck, it won't have a chance to alert twitter that it's no longer a subscriber, because it'll be totally fucked up. Will twitter keep trying to push updates to it constantly?

This is a good example of why software design patterns are dangerous. People like you don't seem to understand where they can apply, and where they totally fall apart.

Re:Uhh, what are you talking about? (3, Insightful)

Anonymous Coward | about 4 years ago | (#32831522)

What about clients who don't have a constant connection to the Internet, or who have a dynamic IP? Now twitter has to poll them, to see if they exist. You end up with the same situation, except worse.

E-mail seems to be doing just fine, despite these "shortcomings".

Re:Uhh, what are you talking about? (1)

auLucifer (1371577) | about 4 years ago | (#32832620)

But don't the IMAP and POP standards require a user to fetch the data? To poll the server for new mail? That's just like polling for tweets, which is what the suggestion was trying to avoid. How does email avoid that problem in a way that makes it comparable to the issue at hand?

Re:Uhh, what are you talking about? (2)

bored (40072) | about 4 years ago | (#32831630)

Yawn. Just because the client isn't polling (REST is just a way of saying polling to make people feel better) doesn't mean this doesn't work on just about every damn device out there. TCP keep-alives are supported by all the major TCP stacks and all the minor ones I've ever used (although not strictly required per RFC 1122). With reasonable configuration parameters for maintaining connections with little data transfer, it's possible to keep a port open for basically an indefinite time period. Once the port is open, it's going to consume server resources (and having more than a few tens of thousands of ports per IP is a problem, and is itself probably a good reason for having some kind of periodic queue-poll mechanism), but it's going to significantly lower the bandwidth versus a polling mechanism.

That said, a big part of the problem is HTTP, and the insistence on using it as an API data transport even when it's not well suited for the job. Even worse is the use of web servers like Apache that consume significant resources for keep-alive transactions. To be fair, Apache was designed more for an environment where a lot of different machines connect for short periods of time and are then done. The HTTP 1.1 keepalive mode didn't mesh well with the one-process-per-connection model, and works only marginally better with the one-thread-per-connection model now in use.

So, basically, I don't think any of your arguments hold, even with actual network failures, client standby, network changes, etc. The client will be notified of connection loss and can simply reconnect. Once reconnected, queued notifications can be issued, or the client can re-poll before reconstructing the notification system.

Frankly, as someone who works with extremely high-bandwidth (many GB/sec), high-I/O-rate (100k transactions/sec per node) systems, I'm shocked at the problems Twitter has. I'm betting someone who didn't have to deal with the BS could get the whole system running on a few fairly high-powered server nodes. The entire data set could probably fit in RAM on a modern high-end server. It's not like they are moving a lot of multi-MB messages around, or running really complex searches.

Just imagine what Google would be like if written the same way.
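
For what it's worth, a minimal sketch of turning on the TCP keep-alives mentioned above for a long-lived client socket; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options are Linux-specific and the tuning values are arbitrary:

    # Enable TCP keep-alives on a long-lived connection so mostly-idle sockets
    # survive NATs and dead peers get detected. Per-option names are Linux-only.
    import socket

    def open_keepalive_connection(host, port):
        sock = socket.create_connection((host, port))
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        if hasattr(socket, "TCP_KEEPIDLE"):
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle secs before probes
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)  # secs between probes
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # probes before giving up
        return sock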

Re:Are They Employing an Event/Listener Paradigm? (1)

John Whitley (6067) | about 4 years ago | (#32831050)

Simple pub/sub [wikipedia.org] message queues [...]

And therein lies the rub. The problem is that pub/sub message queues are neither simple nor scalable as the problem size increases. From the WP article you cited:

As noted above, while pub/sub scales very well with small installations, a major difficulty is that the technology often scales poorly in larger ones.

You pretty much don't get a larger organization than the public internet, assuming that a service becomes popular enough. I've seen these problems myself in one particular large internet company I worked for. Message queue systems are great tools, but there are a ton of practical problems in implementing an event/push based system broadly.

On the up-side, companies like Twitter can (must) take advantage of the fact that they don't need an all-singing, all-dancing pubsub solution. There may be approaches that will work well with the proper problem constraints, much as how eventually-consistent systems can gain advantages over always-consistent solutions when the problem domain permits that.

Re:Are They Employing an Event/Listener Paradigm? (5, Informative)

Animats (122034) | about 4 years ago | (#32831262)

Now if you have to make more than one call, something is wrong. That one call should be a notification to Twitter who I am, where you can contact me and what I want to keep tabs on--be it a keyword or user.

That's not easy to do on a large scale. A persistent connection has to be in place between publisher and subscriber. Twitter would have to have a huge number of low-traffic connections open. (Hopefully only one per subscriber, not one per publisher/subscriber combination.) Then, on the server side, they'd have to have a routing system to track who's following what, invert that information, and blast out a message to all followers whenever there was an update. This is all quite feasible, but it's quite different from the classic HTTP model.

It's been done before, though. Remember Push technology [wikipedia.org] ? That's what this is. PointCast sent their final news/stock push message [cnet.com] in February 2000. There's more support for "push" in HTML5, incidentally.

If you really wanted to scale this concept, the thing to do would be to rework a large server TCP implementation so that it used a buffer pool shared between connections, rather than allocating buffers for each open connection. The TCP implementation needs to be optimized for a very large number of mostly-idle connections. Then implement an RSS server with slow polling, so that the client makes an RSS query which either returns new data, waits for new data, or times out in a minute or two and returns a brief "no changes" reply. Clients can then just read the RSS feed, and be informed immediately when something changes. A single server should be able to serve a few million Twitter-type users in this mode.

The client side would encode what it was "following" in the URL parameters. The server side needs a fabric between data sources such that changes propagate from sources to front servers quickly, and then on each front server, all the RSS feeds for all the followers for the changed item get an update push.

There's a transient load problem. If you have 50,000,000 users, each following a few hundred random users, load is relatively uniform and it works fine. If you have 50,000,000 people following World Cup scores, each update will force 50,000,000 transactions, all at once. All the clients get a notification that something has changed. So they immediately make a request for details (the picture of someone scoring, for example). All at the same time. However, if you arrange things so that the request for details hits a server different from the one that's doing the notifications, ordinary load-balancing will work.
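
A minimal sketch of the slow-polling client loop described above, with a placeholder URL; the server is assumed to hold the request open until new data arrives or a timeout passes, which is not how the real Twitter REST API behaves:

    # Long-poll loop: the server holds the request open and answers as soon as
    # something changes, or returns an empty "no changes" reply after a minute
    # or two. The URL is a placeholder, not a real Twitter endpoint.
    import socket
    import time
    import urllib.error
    import urllib.request

    FEED_URL = "https://example.com/feed?following=alice,bob"

    def follow_feed():
        while True:
            try:
                with urllib.request.urlopen(FEED_URL, timeout=150) as resp:
                    body = resp.read()
                    if body:
                        print("update:", body[:120])
                    # An empty body means "nothing new"; just loop and ask again.
            except (socket.timeout, urllib.error.URLError):
                pass   # timed out or dropped; reconnect immediately
            time.sleep(1)   # tiny back-off so a broken server isn't hammered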

Re:Are They Employing an Event/Listener Paradigm? (0)

Anonymous Coward | about 4 years ago | (#32831988)

That's not easy to do on a large scale. A persistent connection has to be in place between publisher and subscriber. Twitter would have to have a huge number of low-traffic connections open. (Hopefully only one per subscriber, not one per publisher/subscriber combination.) Then, on the server side, they'd have to have a routing system to track who's following what, invert that information, and blast out a message to all followers whenever there was an update.

They are doing precisely that, in fact:

http://thenextweb.com/socialmedia/2010/07/06/twitter-user-streams-on-the-way-better-application-updates-soon-to-follow/ [thenextweb.com]

Re:Are They Employing an Event/Listener Paradigm? (1)

lennier (44736) | about 4 years ago | (#32832414)

It seems like a standardised pub/sub protocol ought to be cacheable, and everyone has an ISP, and ISPs themselves take feeds from networks - so wouldn't it make sense for every local network to have a proxy-like box which subscribes to feeds requested downstream, and therefore reduce the load on upstream boxes?

An open, fully decentralised infrastructure like that would probably come out looking like Usenet, but secure and for micro-transactions. And that seems like it ought to be a much smarter way of doing things than Twitter.

Re:Are They Employing an Event/Listener Paradigm? (1)

Animats (122034) | about 4 years ago | (#32833154)

It seems like a standardised pub/sub protocol ought to be cacheable, and everyone has an ISP, and ISPs themselves take feeds from networks - so wouldn't it make sense for every local network to have a proxy-like box which subscribes to feeds requested downstream, and therefore reduce the load on upstream boxes?

Like NNTP [wikipedia.org] .

Re:Are They Employing an Event/Listener Paradigm? (1)

mibus (26291) | about 4 years ago | (#32833746)

Sounds a lot like something that might get solved with wider application of XMPP and PubSub...

Re:Are They Employing an Event/Listener Paradigm? (0)

Anonymous Coward | about 4 years ago | (#32831574)

> Ruby makes coding RESTful interfaces so easy

Technically, it's Rails that makes coding RESTful interfaces so easy.

Re:Are They Employing an Event/Listener Paradigm? (0)

Anonymous Coward | about 4 years ago | (#32831728)

Their RSS feeds are subject to these limits and are inherently unable to support Event/Listener.

Re:Are They Employing an Event/Listener Paradigm? (1)

thePowerOfGrayskull (905905) | about 4 years ago | (#32831772)

In the right direction, but not what's needed. Something more simple would work - esp. in the context of twitter. The primary API should be simple: "give me updates for my profile". This would include anything you subscribed to - and even allow you to control (via twitter) how often you wanted to receive updates from any given person. This would include friends, lists, search subscriptions, etc. The data returned should be provided in a way that the client can filter/sort appropriately.

Of course other APIs would still be necessary -- "get timeline for person X", "add friend", etc; but those don't make up the majority of usage.

This problem isn't just one with RESTful interfaces though. It seems to be common in many enterprise systems as well: let's make many small function points that give clients more control, instead of using our brains to figure out *how* the data will be getting used. This way, the publisher doesn't have to do anything like thinking - they can put the onus of that on the client. In some situations that works well - but when (as you said) you're just acting as a simple CRUD front end, you're adding little value and may even be making things more difficult for your clients.

Re:Are They Employing an Event/Listener Paradigm? (0)

Anonymous Coward | about 4 years ago | (#32831790)

Event/Listener is not RESTful.

Event/Listener requires state (the list of listeners).
REST is stateless.

Re:Are They Employing an Event/Listener Paradigm? (3, Informative)

DragonWriter (970822) | about 4 years ago | (#32831956)

Okay, if you're making that many calls to Twitter then there might be an inherent flaw with their RESTful interfaces. I think for a long time, the "web" as we know it has suffered from the lack of the Event/Listener paradigm. This is a pretty simple design concept that I'm going to refer to as the Observer [wikipedia.org].

For messaging architectures (like, say, the internet), the pattern is usually described as "Publish/Subscribe". All serious messaging protocols support it (XMPP, AMQP, etc.) and some are dedicated to it (PubSubHubbub). The basic problem with using it the whole way to the client is that many clients are run in environments where it is impractical to run a server, which makes receiving inbound connections difficult.

There are fairly good solutions to that, mostly involving using a proxy for the client somewhere that can run a server which holds messages, and then having the client call the proxy (rather than the message sources) to get all the pending messages together.

I'm not leveling the finger at Twitter, it's a widespread problem that even I have been a part of. Ruby makes coding RESTful interfaces so easy that it's very very tempting to just throw up a few controllers that are basically CRUD interfaces for databases and to call it a day.

Given what's been published about Twitter in the past (including them at one point building their own message queueing system because none of the existing ones that they tried seemed adequate), I don't think what they've done is as simplistic as that on the back-end, though they may be forcing third-party apps through an API which makes it seem like that's what is going on (and produces inefficiencies in the process).
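
A minimal sketch of the "proxy for the client" approach described above: sources push messages to the proxy on the client's behalf, and the client drains them in a single call when it is able to. Purely illustrative; nothing here is a real Twitter or messaging-library API:

    # Toy store-and-forward proxy: publishers push to it, clients drain their
    # pending queue with one poll instead of receiving inbound connections.
    from collections import defaultdict

    class MessageProxy:
        def __init__(self):
            self.pending = defaultdict(list)   # client_id -> queued messages

        def push(self, client_id, message):
            self.pending[client_id].append(message)

        def drain(self, client_id):
            messages, self.pending[client_id] = self.pending[client_id], []
            return messages

    proxy = MessageProxy()
    proxy.push("alice", "tweet #1")
    proxy.push("alice", "tweet #2")
    print(proxy.drain("alice"))   # ['tweet #1', 'tweet #2'] in one round trip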

Re:Are They Employing an Event/Listener Paradigm? (0)

Anonymous Coward | about 4 years ago | (#32832482)

Event/Listener requires peer connections to make callbacks.
Web services are client/server request/response.

Event/Listener requires state.
REST is stateless.

Pub/sub solves the problems they are having, but it is out of scope, since that would no longer be a web API.

Re:Are They Employing an Event/Listener Paradigm? (1)

mtxf (948276) | about 4 years ago | (#32832720)

The Twitter API does indeed cover the kind of thing you're talking about. If you scroll down to the very bottom of that API page you linked you'll see a link to the "Streaming API", http://dev.twitter.com/pages/streaming_api [twitter.com]

This allows you to receive tweets in real-time over a persistent HTTP connection.

It's rather well hidden though, perhaps they don't want people finding out about it for whatever reason (performance?).
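
A minimal sketch of consuming a line-delimited stream over a persistent HTTP connection, in the spirit of the Streaming API linked above; the URL and credentials are placeholders rather than the real endpoint details, and it relies on the third-party requests library:

    # Read newline-delimited JSON from a long-lived HTTP response as it arrives.
    # Needs the third-party "requests" package; URL and auth are placeholders.
    import json
    import requests

    STREAM_URL = "https://stream.example.com/statuses/sample.json"

    def consume_stream(username, password):
        with requests.get(STREAM_URL, auth=(username, password), stream=True) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                if not line:          # periodic keep-alive newlines
                    continue
                status = json.loads(line)
                print(status.get("text", ""))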

Oh no (-1, Flamebait)

Anonymous Coward | about 4 years ago | (#32830608)

Maybe tedious, narcissistic fucks will be forced to shut the fuck up for a little while.

Re:Oh no (0)

Anonymous Coward | about 4 years ago | (#32831524)

But the Anonymous Cowards will NEVER quit!!!

Why Are The APIs Hitting It So Much? (0)

Anonymous Coward | about 4 years ago | (#32830616)

Why are the APIs hitting it that much? That's like almost one every 10 seconds. Do people use twitter as a replacement for IRC/IM?

affect/effect? (1)

Hapless Hero (786287) | about 4 years ago | (#32830682)

"Dire affect"? Like someone's expression is really serious or something?

Re:affect/effect? (-1)

Anonymous Coward | about 4 years ago | (#32830890)

they should of putted effect LOL! lets sex

Re:affect/effect? (0)

Anonymous Coward | about 4 years ago | (#32831220)

That's dire affectation

Throttled Due to (0)

Anonymous Coward | about 4 years ago | (#32830712)

All the little birdies on Jaybird Street
Love to hear the robin go tweet tweet tweet

Hmm... (0)

Anonymous Coward | about 4 years ago | (#32830740)

I can't believe they would impose limits like this. If only there was some mode of communication that allowed me to converse with people in real-time. Hold on, I'm going to go ask my friends in IRC... oh.... wait.

Can we please line up the next fad? Microblogging is so 2008.

on no! my news is gone.... (1)

metalmaster (1005171) | about 4 years ago | (#32830748)

I only subscribe to news services through Twitter (yes, RSS is probably better) and I have noticed that I don't get as many stories as I have gotten in the past. I have a Google gadget that updates every 3 minutes, so I doubt I'm pushing any limits, but they're screwing me anyway :(

Re:on no! my news is gone.... (0)

Anonymous Coward | about 4 years ago | (#32831146)

There is only one thing to say about your "post", and that is GET A LIFE.

It's time to ditch the NoSQL bullshit. (4, Insightful)

Anonymous Coward | about 4 years ago | (#32830758)

It's high time that the so-called "Web 2.0" companies ditch the NoSQL bullshit they've started to put into place. It's not bringing the scalability benefits they all claimed it would, and it's leading to data with very questionable reliability otherwise (not that their data is particularly valuable in the first place...)

A lot of these scalability problems could be solved by using a proper RDBMS on proper hardware that's designed to handle huge concurrent workloads. This level of traffic isn't new by any means. There are many POS systems around the world, from retail operations to airlines, that deal with a similar level of "traffic".

It doesn't matter if they go with a database and hardware stack from Oracle, or a DB2 and hardware stack from IBM, or even use Sybase's ASE on hardware from HP. They just need to invest in some real hardware and some real database systems that are meant for dealing with absolutely huge loads.

Ditch NoSQL databases. Ditch shitty servers. Start using real software, and start using real hardware. That's what other businesses do when they "grow up". If twitter is a viable business, it's time for them to grow up, too.

Re:It's time to ditch the NoSQL bullshit. (1)

LWATCDR (28044) | about 4 years ago | (#32831192)

Okay, just what point-of-sale system handles as many transactions a day as Twitter?
I doubt that even Walmart pushes every POS transaction to its central database in realtime. Frankly, it would be stupid to design the system that way: you could have one or many stores all go down if there was a cable cut.
Odds are that Walmart has servers in each store that push data to a central server every X amount of time.
Also, let's be honest, Walmart's transactions are each far more valuable than Twitter's.

I would think that 120-180 API calls an hour would be good enough. That comes to around one every 30 seconds, and that should be good enough.

Re:It's time to ditch the NoSQL bullshit. (1, Insightful)

Anonymous Coward | about 4 years ago | (#32831394)

International fast food chains, national lotteries, telecommunication service providers, and others.

Twitter should really look at how telecommunications billing is done. It's realtime, it's at a much greater volume than twitter handles, and they sure as hell don't bother with NoSQL "technologies".

Re:It's time to ditch the NoSQL bullshit. (1)

LWATCDR (28044) | about 4 years ago | (#32831830)

Fast food chains use a server in each store and then bundle the data to be transmitted, so not really.
National lotteries? Maybe, but I know that my state lottery also bundles data, because they stop selling tickets about an hour before the drawing.
Telecoms, maybe, but then the data value is much higher than Twitter's.

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32832806)

ah, they cache your pin number too, right?

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32832146)

Telecom companies were using NoSQL databases long before they ever had that name.

And if they had to be subject to the kind of ad hoc querying twitter goes through, they'd fall right over.

I've done data processing for lotteries. Queries can take hours.

Re:It's time to ditch the NoSQL bullshit. (1)

indiechild (541156) | about 4 years ago | (#32834258)

Those examples wouldn't even come close to handling the kind of traffic that Twitter does.

Re:It's time to ditch the NoSQL bullshit. (2, Interesting)

Amouth (879122) | about 4 years ago | (#32831930)

Lowes hardware does - there is a local server in the store that serves as a caching server only if the main trunk fails.

Re:It's time to ditch the NoSQL bullshit. (1)

LWATCDR (28044) | about 4 years ago | (#32832696)

Do you really think Lowes does that many transactions?

Re:It's time to ditch the NoSQL bullshit. (1)

Amouth (879122) | about 4 years ago | (#32832876)

well how about a better one..

Amazon

Re:It's time to ditch the NoSQL bullshit. (4, Informative)

Miseph (979059) | about 4 years ago | (#32831942)

Debit card processing systems require real-time access to the full network for every single transaction. PIN numbers cannot be cached locally, and must be validated before completing the transaction.

Re:It's time to ditch the NoSQL bullshit. (3, Interesting)

Knux (990961) | about 4 years ago | (#32832510)

Any telecom does way more than that.

I've worked in a big telecom with 40 million+ clients, and I've seen an 8-node Oracle RAC responsible for the whole pre-paid client database handle far, far more transactions and queries than Twitter says it does.

Each regional server responsible for authorizing calls has a 2-node Oracle RAC, and it too handles far more transactions and queries than Twitter.

So, there you go... The excuse for using NoSQL was that it is quicker in some cases. It's not; time to move back to RDBMSes.

Re:It's time to ditch the NoSQL bullshit. (2, Insightful)

LWATCDR (28044) | about 4 years ago | (#32832746)

It is all about bang for the buck. I do not think that anyone has ever said that you cannot scale an SQL server to handle a Twitter-like load. The question is one of cost.
I am sure that you could handle the load with DB2 on a z machine too, but at what cost?
I am actually a big fan of SQL and find NoSQL extremely cumbersome.
But then I really do not have a need to scale that big.
I am just not willing to write off NoSQL yet. I know that Google has used it for some things.
But when you are talking about Twitter, a key factor is the cost per transaction. That must be very low. And if I have to wait 30 seconds for a tweet, that is a good trade-off for me.

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32832930)

If twitter can't bring in or borrow enough money to cover the cost of obviously-necessary infrastructure investments, then perhaps they should admit that their "business model" (whatever the fuck it may be, if they even have one) is a failure, and as a company they should go under.

Then again, they are American-based, so maybe they could whine and bitch and claim to be "too big to fail" until they get some money from the government? That's how American "free market capitalism" works these days, isn't it?

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32833964)

"okay just what Point of Sale System handles as many transactions a day as Twitter?"

  Interac's single day record for transactions is 15.5 million.

"You could have a ore many stores all go down if their was a cable cut"

Whole stores do go down in the event of a network failure. But somehow Canada, where debit transactions have surpassed cash transactions since 2001, manages with such a stupid system.

Re:It's time to ditch the NoSQL bullshit. (1)

MarkRose (820682) | about 4 years ago | (#32835518)

Walmart does indeed have servers at each store. They cache a lot of stuff such as inventory, pricing, etc., and it's batch communicated on at least a daily basis.

Of course, payments must be realtime.

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32831298)

>>If twitter is a viable business, it's time for them to grow up, too.

Ah, key point: truly reliable, scalable solutions are implemented by companies with paying customers. Twitter can solve its bandwidth issue by becoming a for-fee service.

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32831304)

So the solution is adding one more layer. Never heard that before, it could work....

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32831388)

Of course there'd be no scalability benefits, and it would lead to data with questionable reliability. I mean, what are you doing using no SQL?

Re:It's time to ditch the NoSQL bullshit. (0)

Anonymous Coward | about 4 years ago | (#32832102)

Y'know, if you want to be the next Zed Shaw, it takes quite a bit more than *just* cussing at people. You actually have to know what the fuck you're talking about.

Re:It's time to ditch the NoSQL bullshit. (2)

jo42 (227475) | about 4 years ago | (#32833830)

Start using real software, and start using real hardware. That's what other businesses do when they "grow up". If twitter is a viable business, it's time for them to grow up, too.

Well that's the problem right there. Many of these 'businesses' (Twitter, Foursquare, Gowalla, etc.) are not viable businesses. They are still bleeding their initial (or secondary, or tertiary) funding rounds dry with expenditures greatly outpacing their receivables (if any). Any hope of being around in a few years is if someone comes along and buys them out or dumps even more $$$$ into their black holes. Or get into advertising - where Google is the million tonne gorilla that you have to compete with.

Re:It's time to ditch the NoSQL bullshit. (1)

SQL Error (16383) | about 4 years ago | (#32835114)

Ahahahahaha!! Nice one.

Wait, you were serious?

Sorry, no, loads like Twitter's are absolute poison to RDBMSes. It doesn't matter how much money you pour down that rathole, it's not going to deliver. You can sustain the write load, that's not so much of an issue - less than 100 million records a day at the moment - but the read load is completely impossible.

The solution is left as an exercise for the reader.

Protocol overhead (3, Interesting)

ickleberry (864871) | about 4 years ago | (#32830770)

I wonder if it would have much of an impact if they switched from the verbose JSON/XML-over-HTTP formats for the API to a binary UDP-based protocol. Twitter seems well suited to such a protocol, since it is so simple and the messages are so short.

Is it that they are doing too much processing on the data, wasting too much bandwidth, or is their database causing trouble? Since it's Twitter, obviously any bandwidth used is a waste, but you know what I mean.
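
A minimal sketch of what a compact binary encoding for a tweet-sized update could look like; the field layout is invented purely to illustrate how small it is next to XML/JSON over HTTP:

    # Pack a tweet-sized update into a fixed little-endian header plus UTF-8 body:
    # 8-byte user id, 8-byte status id, 2-byte length, then the text itself.
    # The layout is made up for illustration, not any real protocol.
    import struct

    HEADER = struct.Struct("<QQH")

    def encode(user_id, status_id, text):
        body = text.encode("utf-8")
        return HEADER.pack(user_id, status_id, len(body)) + body

    def decode(payload):
        user_id, status_id, length = HEADER.unpack_from(payload)
        text = payload[HEADER.size:HEADER.size + length].decode("utf-8")
        return user_id, status_id, text

    packet = encode(12345, 67890, "140 characters or fewer")
    print(len(packet), decode(packet))   # a few dozen bytes vs. kilobytes of markup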

Re:Protocol overhead (0)

Anonymous Coward | about 4 years ago | (#32831516)

LMAO, XML over HTTP?

Ho-ly fuck. That is hilarious.

Google as alternative? (1)

Haffner (1349071) | about 4 years ago | (#32831002)

I don't twitter, but couldn't google be used to check twitter updates? I read a lot recently about google archiving twitter comments, so a simple search for that user's twitter page, filtered by time could allow as many data calls as one wanted.

Correct me if I am wrong.

Re:Google as alternative? (0)

Anonymous Coward | about 4 years ago | (#32831400)

Google doesn't archive tweets in [near] real time (afaik). Once it has hit Google, the tweet is stale and doesn't really solve the problem of the average Twit who needs instant updates.

this explains it (1)

EpsCylonB (307640) | about 4 years ago | (#32831138)

Yep, this explains why I can no longer use bitlbee or twirssi to follow my Twitter timeline.

Re:this explains it (0)

Anonymous Coward | about 4 years ago | (#32831380)

That twirssi gives me an idea for making an ircd interface to twitter. Maybe being able to treat hashtags like channels and seeing everything everyone says in one place instead of looking at one person's half of the conversation at a time will make it all make sense?

Probably not, and now that the API will be drastically rate-limited, it won't make sense to try.

Twitter has jumped the shark (2)

Locke2005 (849178) | about 4 years ago | (#32831506)

Nobody goes on Twitter anymore -- it's too crowded! (With apologies to Yogi Berra.)

The limit is back up to 350 (1)

moreati (119629) | about 4 years ago | (#32832646)

AFAICT the limit is now back up to 350/hour, and has been for a day at least. This is in the UK, in case it's turned regional.

Oh noes! (-1, Troll)

Anonymous Coward | about 4 years ago | (#32833266)

Oh noes! How will I sustain my pathetic self without a constant stream of inane Twatter updates to read and wank over as I breathe in and out through my mouth?

Finally (0)

Anonymous Coward | about 4 years ago | (#32835216)

This has been discussed for months elsewhere (http://www.dslreports.com/forum/r24106024-Hamilton-Twitter-trouble). Between Twitter being notoriously problematic and developers, providers, and Twitter pointing fingers at each other, I'm surprised it took this long to hit Slashdot.
