If God is all knowing, that inherently means we lack free will and cannot alter what is predetermined for us through god's knowledge.
My argument: If God knows everything, is a perfect being, and is infallible, we would not be able to exercise free will outside of his knowledge of what happens in the future.

I posted a version of this on AskAChristian as one of three questions in my post there, but I feel it would be a better post for this subreddit and might generate better discussion. I'm an atheist but was born and raised Christian (Catholic), so obviously my understanding of God and religion stems from that and other outside sources I have been exposed to.

I believe it is a general consensus (at least among many Christians) that God is an all-knowing, infallible, perfect being, or at least something along the lines of that definition. Meaning God knows everything, including what will happen in the future, and that God cannot be wrong. But I think that creates an issue with free will, as I will explain below.

---

The way I see it, we have 2 options with this problem:
God knows everything, including our future actions and the consequences of those actions. This means we are not free to make any choice that God does not already know we will make. Thus, we do not have true free will to act outside of God's knowledge of what will happen and cannot form our own fate.
We humans do indeed have the ability to make our own choices that alter the future, even if they are not in line with what God knows to be true. This means that God was wrong about what would happen and is therefore not perfect or infallible.
Edit: I removed a section about 2 common arguments I see, and will add this small part instead. The steps of my argument:
God knows our future actions.
This means we are not able to act outside of god's knowledge as that would mean god was incorrect and imperfect.
Number 2 implies we must choose what is already known by God. This inherently means we do not have free will to make our own choices, since the probabilities of our choices are binary (either a 100% probability or a 0% probability).
---

This conundrum also implies God already knows who is going to hell or heaven before we are even born. To me that makes our time on Earth seem completely pointless in the event a god actually does exist. Unless we are merely here acting out our roles in a script God already wrote.
No gods, no kings, only NOPE - or divining the future with options flows. [Part 2: A Random Walk and Price Decoherence]
tl;dr -

1) Stock prices move continuously because different market participants end up having different ideas of the future value of a stock.

2) This difference in valuations is part of the reason we have volatility.

3) IV crush happens as a consequence of future possibilities being extinguished very rapidly at a binary catalyst like earnings, as opposed to the normal slow way.

I promise I'm getting to the good parts, but I'm also writing these as a guidebook which I can use later so people never have to talk to me again. In this part I'm going to start veering a bit into speculation territory (e.g. ideas I believe or have investigated, but which aren't necessarily well known), but I'm going to make sure those sections are properly marked as speculative (and you can feel free to ignore/dismiss them). Marked as [Lily's Speculation].

As some commenters have pointed out in prior posts, I do not have formal training in mathematical finance/finance (my background is computer science, discrete math, and biology), so oftentimes I may use terms that I've invented which have analogous/existing terms (e.g. the law of surprise is actually the first law of asset pricing applied to derivatives under risk neutral measure, but I didn't know that until I read the papers later). If I mention something wrong, please do feel free to either PM me (not chat) or post a comment, and we can discuss/I can correct it! As always, buyer beware.
This is also the first section where you do need to be familiar with the topics I've previously discussed. My previous posts:

1) https://www.reddit.com/thecorporation/comments/jck2q6/no_gods_no_kings_only_nope_or_divining_the_future/

2) https://www.reddit.com/thecorporation/comments/jbzzq4/why_options_trading_sucks_or_the_law_of_surprise/

---

A Random Walk Down Bankruptcy

A lot of us have probably seen the term random walk, maybe in the context of A Random Walk Down Wall Street, which seems like a great book I'll add to my list of things to read once I figure out how to control my ADD. It seems obvious, then, what a random walk means - when something is moving, the next move is random. So if my stock price is $1 and it can move in $0.01 increments, and the price is truly randomly walking, there should be roughly a 50% chance it moves up in the next second (to $1.01) or down (to $0.99).

If you've traded for more than a hot minute, this concept should seem obvious, because especially intraday, it usually isn't clear why price moves the way it does (despite what chartists want to believe, and I'm sure a ton of people in the comments will tell me why fettucini lines and Batman dojis tell them things). For a simple example, we can look at SPY's chart from Friday, Oct 16, 2020:

https://preview.redd.it/jgg3kup9dpt51.png?width=1368&format=png&auto=webp&s=bf8e08402ccef20832c96203126b60c23277ccc2

I'm sure again 7 different people can tell me 7 different things about why the chart looks the way it does, or how if I delve deeply enough into it I can find out which man I'm going to marry in 2024, but to a rationalist it isn't exactly apparent why SPY's price declined from 349 to ~348.5 at around 12:30 PM, or why it picked up until about 3 PM and then went into precipitous decline (although I do have theories why it declined EOD, but that's for another post).
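The $1 coin-flip walk described above can be sketched in a few lines. This is a toy illustration of the concept, not a market model - the tick size, step count, and seed are arbitrary:

```python
import random

def random_walk(start=1.00, steps=1000, tick=0.01, seed=42):
    """Simulate a symmetric random walk: each step the price moves
    up or down one tick with equal (50%) probability."""
    rng = random.Random(seed)
    price = start
    path = [price]
    for _ in range(steps):
        price += tick if rng.random() < 0.5 else -tick
        path.append(round(price, 2))
    return path

path = random_walk()
print(path[:5], "->", path[-1])
```

Run it a few times with different seeds and the end point jumps around, even though every single step was a fair coin flip - which is exactly the point.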
An extremely clever or bored reader from my previous posts could say, "Is this the price formation you mentioned in the law of surprise post?" and the answer is yes. If we relate it back to the individual buyer or seller, we can explain the concept of a stock price's random walk as such:
Most market participants have an idea of an asset's true value (an idealized concept of what an asset is actually worth), which they can derive using models or possibly enough brain damage. However, an asset at any given time is not worth one value (usually*), but a spectrum of possible values, usually representing what the asset should be worth in the future. A naive way we can represent this without delving into too much math (because let's face it, most of us fucking hate math) is:

Current value of an asset = sum over all possible futures of (future possible value * the likelihood of that value)
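That probability-weighted sum is simple enough to show directly. The scenario values and probabilities below are made-up numbers purely for illustration:

```python
# Each entry: (future possible value, likelihood of that value).
# Likelihoods must sum to 1 for this to be a proper expectation.
scenarios = [
    (300.0, 0.25),  # bear case
    (420.0, 0.50),  # base case
    (600.0, 0.25),  # bull case
]

# Current value = sum of (future value * likelihood of that value)
current_value = sum(value * prob for value, prob in scenarios)
print(current_value)  # 435.0
```

Two participants who agree on the possible future values but disagree on the likelihoods will compute different current values - which is the divergence discussed below.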
In actuality, most models aren't that simple, but it does generalize to a ton of more complicated models which you need more than 7th grade math to understand (Black-Scholes, DCF, blah blah blah). While in many cases the first term - future possible value - is well defined (Tesla is worth exactly $420.69 billion in 2021, and maybe we can all agree on that by looking at car sales and Musk tweets), where it gets more interesting is the second term - the likelihood of that value occurring. [In actuality, the price of a stock, for instance, is way more complicated, because a stock can be sold at any point in the future (versus in my example, just the value in 2021), and needs to account for all values of Tesla at any given point in the future.]

How do we estimate the second term - the likelihood of that value occurring? For this class, it actually doesn't matter, because the key concept is this idea: even with all market participants having the same information, we do anticipate that every participant will have a slightly different view of future likelihoods. Why is that? There are many reasons:

Some participants may undervalue risk (aka WSB FD/yolos) and therefore weight the probability of gaining lots of money much more heavily than the probability of going bankrupt.

Some participants may have alternative data which improves their understanding of what future values should be, therefore letting them see opportunity.

Some participants might overvalue liquidity, and just want to GTFO, thereby accepting a haircut on their asset's value to quickly unload it (especially in markets with low liquidity).

Some participants may just be yoloing and not even know what Fastly does before putting their account all in on weekly puts (god bless you).

In the end, the why doesn't matter as much as the what: because of these diverging interpretations, over time we can expect the price of an asset to drift from the current value even with no new information added.
In most cases, the calculations that market participants use (which I will, as a Lily-ism, call the future expected payoff function, or FEPF) end up being quite similar in aggregate, and this is likely why asset prices tend to move slightly up and down for no reason (or rather, this is one interpretation of why). At this point, I expect the 20% of you who know what I'm talking about or have a finance background to say, "Oh but blah blah efficient market hypothesis contradicts random walk blah blah blah" and you're correct, but it also legitimately doesn't matter here. In the long run, stock prices are clearly not a random walk, because a stock's value is obviously tied to the company's fundamentals (knock on wood I don't regret saying this in the 2020s). However, intraday, in the absence of new public information, it becomes a close enough approximation.

Also, some of you might wonder what happens when the future expected payoff function (FEPF) I mentioned before ends up wildly diverging between participants for a stock. This could happen because all of us try to short Nikola because it's quite obviously a joke (so our FEPF for Nikola could, let's say, be 0), while the 20 or so remaining bagholders at Nikola Corporation decide that their FEPF of Nikola is $10,000,000 a share.

One of the interesting things which intuitively makes sense is that for nearly all stocks, the amount of divergence among market participants' FEPFs increases substantially the farther you go into the future. This makes intuitive sense, even if you've already quit trying to understand what I'm saying. It's quite easy to say that if SPY is worth 350.21 at 12:51 PM, then at 12:52 PM it will in all likelihood be worth somewhere around 350.10 to 350.30. Obviously there are cases where this doesn't hold, but more likely than not, prices tend to follow each other and don't gap up/down hard intraday. However, what if I asked you - given SPY is worth 350.21 at 12:51 PM today, what will it be worth in 2022?
Many people will then try to half-ass some DD about interest rates and Trump fleeing to Ecuador to value SPY at 150, while others will assume bull markets will continue indefinitely and SPY will obviously be 7000 by then. The truth is - no one actually knows, because if you did, you wouldn't be reading a reddit post on this at 2 AM in your jammies. In fact, if you could somehow figure out the FEPF of all market participants at any given time, assuming no new information occurs, you should be able to roughly predict the true value of an asset infinitely far into the future (hint: this doesn't exactly hold, but again don't @ me).

Now if you do have a finance background, I expect gears will have clicked for some of you, and you may see strong analogies between the FEPF divergence I mentioned and a concept we're all at least partially familiar with - volatility.

Volatility and Price Decoherence ("IV Crush")

Volatility, just like the Greeks, isn't exactly a real thing. Most of us have some familiarity with implied volatility on options, mostly from when we get IV crushed the first time and realize we just lost $3000 on Tesla calls. If we assume that the current price should represent the weighted likelihoods of all future prices (the random walk), volatility implies the following two things:
Volatility reflects the uncertainty of the current price
Volatility reflects the uncertainty of the future price for every point in the future where the asset has value (up to expiry for options)
[Ignore this section if you aren't pedantic] There's obviously more complex mathematics here, because I'm sure some of you will argue in the comments that IV doesn't go up monotonically as the option expiry date goes farther and farther into the future, and you're correct (this is because asset pricing reflects drift rate and other factors, and certain assets like the VIX end up having cost of carry).

Volatility in options is interesting as well, because in actuality it isn't something that can be exactly computed - it arises as a plug between the idealized value of an option (the modeled price) and the real, market value of an option (the spot price). Additionally, because the makeup of market participants in an asset's market changes over time, and new information also comes in (thereby increasing the likelihood of some possibilities and reducing it for others), volatility does not remain constant over time, either.

Conceptually, volatility is also pretty easy to understand. But what about our friend, IV crush? I'm sure some of you have bought options to play events, the most common one being earnings reports, which happen quarterly for every company due to regulations. For the more savvy, you might know of expected move, which is a calculation that uses the volatility (and therefore price) increase of at-the-money options about a month out to calculate how much the options market forecasts the underlying stock price to move in response to ER.

Binary Catalyst Events and Price Decoherence

Remember what I said about price formation being a gradual, continuous process? In the face of special circumstances, in particular binary catalyst events - events where the outcome is one of two choices, good (1) or bad (0) - the gradual part gets thrown out the window.
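As an aside on the expected move mentioned above: one common back-of-envelope approximation (my choice for illustration - traders often instead just quote the at-the-money straddle price) treats it as a one-standard-deviation move, spot * IV * sqrt(days/365):

```python
import math

def expected_move(spot, iv, days):
    """Rough one-standard-deviation expected move over `days` days,
    given annualized implied volatility `iv` (e.g. 0.40 for 40%)."""
    return spot * iv * math.sqrt(days / 365)

# Hypothetical example: a $350 stock with 40% IV, 7 days to ER
print(round(expected_move(350, 0.40, 7), 2))
```

The inputs here are made up; the point is just that higher IV or a longer horizon widens the move the options market is pricing in.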
Earnings in particular is a common and notable case of a binary event, because the price will go down (assuming the company did not meet the market's expectations) or up (assuming the company exceeded the market's expectations) (it will rarely stay flat, so I'm not going to address that case). Earnings is especially interesting because, unlike other catalytic events, they're pre-scheduled (so the whole market expects them at a certain date/time) and usually have publicly released pre-estimations (guidance, analyst predictions). This separates them from other binary catalysts (e.g. FSLY dipping 30% on a guidance update) because the market has ample time to anticipate the event, and participants therefore have time to speculate and hedge on it.

In most binary catalyst events, we see rapid fluctuations in price, usually called a gap up or gap down, caused by participants rapidly taking in new information and changing their FEPF accordingly. This is for the most part an anticipated adjustment to the FEPF based on the expectation that earnings is a Very Big Deal (TM), and is the reason why volatility and therefore option premiums increase so dramatically before earnings. What makes earnings so interesting in particular is the dramatic effect it can have on all market participants' FEPFs, as opposed to, let's say, a Trump tweet, or more people dying of coronavirus. In lots of cases, the FEPF of the short term (3-6 months) especially changes rapidly in response to updated guidance about a company, causing large portions of the future possibility spectrum to rapidly and spectacularly go to zero. In an instant, your Tesla 10/30 800Cs go from "some value" to "not worth the electrons they're printed on".

[Lily's Speculation] This phenomenon I like to call price decoherence, mostly as an analogy to the quantum mechanical processes which produce similar results (the collapse of a wavefunction on observation).
Price decoherence occurs continuously at a widespread but minor scale, which we normally call price formation (and explains portions of the random walk derivation above), but hits a special limit in the face of binary catalyst events, where large portions of the future expected payoff function are extinguished in an instant, versus the more gradual process which occurs over time (as an option nears expiration).

Price decoherence, mathematically, ends up being a more generalizable case of the phenomenon we all love to hate - IV crush. Price decoherence during earnings collapses the future expected payoff function of a ticker, leaving large portions of the option chain effectively worthless (IV crush). It has interesting implications, especially in the case of hedged option sellers, our dear market makers. Given the expectation that they maintain delta-gamma neutrality, and that many of the options they have written are now worthless with 0 delta, what do they now have to do? They have to unwind. [/Lily's Speculation]

- Lily
No gods, no kings, only NOPE - or divining the future with options flows. [Part 3: Hedge Winding, Unwinding, and the NOPE]
Hello friends! We're on the last post of this series ("A Gentle Introduction to NOPE"), where we get to use all the Big Boy Concepts (TM) we've discussed in the prior posts and put them all together. Some words before we begin:
This post will be massively theoretical, in the sense that my own speculation and inferences will be largely peppered throughout the post. Are those speculations right? I think so, or I wouldn't be posting it, but they could also be incorrect.
I will briefly touch on using the NOPE here, but I will make a secondary post with much more interesting data and trends I've observed. This post is primarily for explaining what NOPE is, why it potentially works, and what it potentially measures.
My advice before reading this is to glance at my prior posts, and either read those fully or at least make sure you understand the tl;drs: https://www.reddit.com/thecorporation/collection/27dc72ad-4e78-44cd-a788-811cd666e32a

Depending on popular demand, I will also make a last-last post called FAQ, where I'll tabulate interesting questions you guys ask me in the comments!

---

So, a brief recap before we begin:

Market Maker ("Mr. MM"): An individual or firm who makes money off the exchange fees and bid-ask spread for an asset, while usually trying to stay neutral about the direction the asset moves.

Delta-gamma hedging: The process Mr. MM uses to stay neutral when selling you shitty OTM options, by buying/selling shares (usually) of the underlying as the price moves.

Law of Surprise [Lily-ism]: Effectively, the expected profit of an options trade is zero for both the seller and the buyer.

Random Walk: A special case of a deeper probability concept called a martingale, which basically models stocks or similar phenomena as moving randomly at every step they take (for stocks, roughly every millisecond). This is one of the most popular views of how stock prices move, especially on short timescales.

Future Expected Payoff Function [Lily-ism]: A hidden function that every market participant has for an asset, which more or less models all the possible future probabilities/values of the asset to arrive at a "fair market price". This is a more generalized case of a pricing model like Black-Scholes, or DCF.

Counter-party: The opposite side of your trade (if you sell an option, they buy it; if you buy an option, they sell it).

Price decoherence [Lily-ism]: A more generalized notion of IV crush; price decoherence happens when, instead of the FEPF changing gradually over time (price formation), the FEPF rapidly changes, usually due to new information being added to the system (e.g. Vermin Supreme winning the 2020 election).
---

One of the most popular gambling events for option traders to play is earnings announcements, and I do owe the concept of NOPE to hypothesizing specifically about the behavior of stock prices at earnings. Much like a black hole in quantum mechanics, most conventional theories about how price should work rapidly break down briefly before, during, and after ER, and experienced traders generally tend to shy away from playing earnings, given their unpredictability.

Before we start: what is NOPE? NOPE is a funny backronym from Net Options Pricing Effect, which in its most basic sense measures the impact option delta has on the underlying price, as compared to share volume. When I first started investigating NOPE, I called it OPE (options pricing effect), but NOPE sounds funnier.

The formula for it is dead simple, but I also have no idea how to do LaTeX on reddit, so this is the best I have:

https://preview.redd.it/ais37icfkwt51.png?width=826&format=png&auto=webp&s=3feb6960f15a336fa678e945d93b399a8e59bb49

Since I've already encountered this question: put delta in this case is the absolute value (e.g. 50 delta) to represent a put. If you represent put delta as a negative (the conventional way), do not subtract it; add it.

To keep this simple for the non-mathematically minded: the NOPE today is equal to the weighted sum (weighted by volume) of the delta of every call minus the delta of every put, over all options chains extending from today to infinity. We then divide that number by the number of shares traded today in the market session (ignoring pre-market and post-market, since options cannot trade during those times). Effectively, NOPE is a rough and dirty way to approximate the impact of delta-gamma hedging as a function of share volume, with us hand-waving the following factors:
To keep calculations simple, we assume that all counter-parties are hedged. This is obviously not true, especially for idiots who believe theta ganging is safe, but it holds largely true, especially for highly liquid tickers or tickers with designated market makers (e.g. any ticker on the NASDAQ, for instance).
We assume that all hedging takes place via shares. For SPY and other products tracking the S&P, for instance, market makers can actually hedge via futures or other options. This has the benefit for large positions of not moving the underlying price, but still makes up a fairly small amount of hedges compared to shares.
Winding and Unwinding
I briefly touched on this in a past post, but two properties of NOPE seem to apply well to EER-like behavior (aka any binary catalyst event):
NOPE measures sentiment - In general, the options market is seen as better informed than share traders (e.g. insiders trade via options, because of leverage + it's easier to mask positions). Therefore, a heavy skew toward calls is usually seen as a bullish sign, while the reverse is also true.
NOPE measures system stability
I'm not going to one-sentence explain #2, because why say in one sentence what I can write 1000 words on. In short, NOPE intends to measure the sensitivity of the system (the ticker) to disruption. This makes sense when you view it in the context of delta-gamma hedging. When we assume all counter-parties are hedged, this means an absolutely massive amount of shares get sold/purchased when the underlying price moves. This is because of the following:

a) Assume I, Mr. MM, sell 1000 call options for NKLA 25C 10/23 and 300 put options for NKLA 15p 10/23. I'm just going to make up deltas because it's too much effort to calculate them - 30 delta calls, 20 delta puts. This implies Mr. MM needs the following to delta hedge:

(1000 call options * 30 shares to buy for each) [to balance out writing calls] - (300 put options * 20 shares to sell for each) = 24,000 net shares Mr. MM needs to acquire to balance out his deltas/be fully neutral.

b) This works well when NKLA is at $20. But what about when it hits $19 (because it only can go down, just like their trucks)? Thanks to gamma, now we have to recompute the deltas, because they've changed for both the calls (they went down) and the puts (they went up). Let's say, to keep it simple, that now my calls are 20 delta, and my puts are 30 delta. Mr. MM now has to have:

(1000 call options * 20 shares to have for each) - (300 put options * 30 shares to sell for each) = 11,000 shares.

Therefore, with a $1 shift in price, to hedge and be indifferent to direction, Mr. MM has to go from 24,000 shares to 11,000 shares, meaning he has to sell 13,000 shares ASAP or take on increased risk. (This process, by the way, is called hedge unwinding.)

Now, you might be saying, "13,000 shares seems small. How would this disrupt the system?" It won't, in this example.
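The NKLA arithmetic above fits in a few lines. Deltas here are expressed as shares per contract (i.e. delta times the 100-share contract multiplier, matching "30 shares to buy for each" in the example):

```python
def mm_hedge_shares(positions):
    """Net shares a market maker must hold to stay delta neutral
    against options they've written. `positions` is a list of
    (contracts_sold, hedge_shares_per_contract, kind) tuples,
    where kind is 'call' or 'put'. Written calls require buying
    shares (+); written puts require selling shares (-)."""
    return sum((1 if kind == "call" else -1) * contracts * delta
               for contracts, delta, kind in positions)

# a) NKLA at $20: 1000 calls at 30 delta, 300 puts at 20 delta
before = mm_hedge_shares([(1000, 30, "call"), (300, 20, "put")])
# b) NKLA at $19: calls decay to 20 delta, puts rise to 30 delta
after = mm_hedge_shares([(1000, 20, "call"), (300, 30, "put")])
print(before, after, before - after)  # 24000 11000 13000
```

Same numbers as the worked example: the $1 drop forces Mr. MM to dump 13,000 shares to stay neutral.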
But across thousands of MMs and millions of contracts, this can - especially in highly optioned tickers - make up a substantial fraction of the net flow of shares per day. And as we know from our desk example, the buying or selling of shares directly changes the price of the stock itself. This, by the way, is why the NOPE formula takes the shape it does. Some astute readers might notice it looks similar to GEX, which is not a coincidence. GEX, however, replaces daily volume with open interest, and measures gamma instead of delta, which I did not find good statistical evidence to support, especially for earnings.

So, with our example above, why does NOPE measure system stability? We can assume for argument's sake that if someone buys a share of NKLA, they're fine with moderate price swings (+- $20, since it's NKLA, obviously), and in it for the long/medium haul. And in most cases this is fine - we can own stock and not worry about minor swings in price. But market makers can't* (they can, but it exposes them to risk), because of how delta works. In fact, most institutional market makers have clearly defined delta limits by end of day, and even small price changes require them to rebalance their hedges. Over the whole market, this adds up to a lot of shares moving, just to balance out your stupid Robinhood YOLOs. While there are some tricks (dark pools, block trades) to not impact the price of the underlying, the reality is that the more options contracts there are on a ticker, the more outsized influence they will have on the ticker's price. This could technically be exactly balanced, if option put delta were equal to option call delta, but that never actually ends up being the case. And unlike shares traded, the shares hedging the options are more unstable, meaning they will be bought/sold in response to small price shifts - and will end up magnifying those price shifts accordingly.
NOPE and Earnings
So we have a new shiny indicator, NOPE. What does it actually mean and do? There's much literature going back to the 1980s showing that options markets do have some level of predictiveness towards earnings, which makes sense intuitively. Unlike share markets, where you can continue to hold your share even if it dips 5%, in options you get access to expanded opportunity to make riches... and losses. An options trader betting on earnings is making a risky and therefore informed bet that he or she knows the outcome, versus a share trader who might be comfortable bagholding in the worst-case scenario.

As I've mentioned largely in comments on my prior posts, earnings is a special case because, contrary to popular misconception, stocks do not go up and down solely due to analyst expectations being met, beaten, or missed. In fact, stock prices move according to the consensus market expectation, which is a function of all the participants' FEPFs on that ticker. This is why the price moves so dramatically - even if a stock beats, it might not beat enough to justify the high price tag (FSLY); even if a stock misses, it might have spectacular guidance, or maybe the market just was assuming it would go bankrupt instead.

To look at the impact of NOPE and why it may play a role in post-earnings-announcement immediate price moves, let's review the following cases:
1) Stock Meets/Exceeds Market Expectations (aka price goes up) - In the general case, we would anticipate post-ER market participants value the stock at a higher price, pushing it up rapidly. If there's a high absolute value of NOPE on said ticker, this should end up magnifying the positive move since:
a) If NOPE is highly negative - This means a ton of put buying, which means a lot of those puts are now worthless (due to price decoherence). This means that to stay delta neutral, market makers need to close out their sold/shorted shares, buying them back and pushing the stock price up.

b) If NOPE is highly positive - This means a ton of call buying, which means a lot of puts are now worthless (see a) but also a lot of calls are now worth more. This means that to stay delta neutral, market makers need to close out their sold/shorted shares AND also buy more shares to cover their calls, pushing the stock price up.

2) Stock Misses Market Expectations (aka price goes down) - Inversely to what I mentioned above, this should push the stock price down fairly immediately. If there's a high absolute value of NOPE on said ticker, this should end up magnifying the negative move since:

a) If NOPE is highly negative - This means a ton of put buying, which means a lot of those puts are now worth more, and a lot of calls are now worthless/worth less (due to price decoherence). This means that to stay delta neutral, market makers need to sell/short more shares, pushing the stock price down.

b) If NOPE is highly positive - This means a ton of call buying, which means a lot of calls are now worthless (see a) but also a lot of puts are now worth more. This means that to stay delta neutral, market makers need to sell even more shares to keep their calls and puts neutral, pushing the stock price down.

---

Based on the above two cases, it should be a bit clearer why NOPE is a measure of sensitivity to system perturbation. While we previously discussed it in the context of magnifying a directional move, the truth is it also provides a directional bias to our "random" walk. This is because, given a price move in the direction predicted by NOPE, we expect the move to be magnified, especially in situations of price decoherence.
If a stock price goes up right after an ER report drops, even based on one participant deciding to value the stock higher, this provides a runaway reaction which boosts the stock price (due to hedging factors as well as other participants' behavior) and insulates it against drops.
NOPE and NOPE_MAD
I'm going to gloss over this section because it's more statistical methods than anything interesting. In general, if you have enough data, I recommend using NOPE_MAD over NOPE. While NOPE in theory represents a "real" quantity (net option delta over net share delta), NOPE_MAD (the median absolute deviation of NOPE) does not. NOPE_MAD simply helps answer the following:
How exceptional is today's NOPE versus historic baseline (30 days prior)?
How do I compare two tickers' NOPEs effectively (since some tickers, like TSLA, have a baseline positive NOPE, because Elon memes)? In the initial stages, we used just a straight numerical threshold (let's say NOPE >= 20), but that quickly broke down. NOPE_MAD aims to detect anomalies, because anomalies in general give you tendies.
I might add the formula later in Mathenese, but simply put, to find NOPE_MAD you do the following:
Calculate today's NOPE score (this can be done end of day or intraday, with the true value being EOD of course)
Calculate the end of day NOPE scores on the ticker for the previous 30 trading days
Compute the median of the previous 30 trading days' NOPEs
Find today's deviation as compared to the MAD calculated by: [(today's NOPE) - (median NOPE of last 30 days)] / (median absolute deviation of last 30 days)
This is usually reported as sigma (σ), and has a few interesting properties:
The mean of NOPE_MAD for any ticker is almost exactly 0.
[Lily's Speculation's Speculation] NOPE_MAD acts like a spring, and has a tendency to reverse direction as a function of its magnitude. No proof on this yet, but exploring it!
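The four steps above can be sketched directly with the standard library's `statistics` module (the toy history numbers below are made up for illustration):

```python
import statistics

def nope_mad(today_nope, history):
    """NOPE_MAD per the steps above: today's NOPE minus the median of
    the trailing window, scaled by the window's median absolute
    deviation. `history` should be the prior 30 end-of-day NOPEs."""
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history)
    return (today_nope - med) / mad

# Toy baseline: 30 "days" of NOPE hovering around 5
history = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6] * 3
print(nope_mad(12, history))  # 7.0 sigma - a big anomaly
```

A day that lands near the trailing median scores near 0 sigma; a reading like 12 against this baseline scores 7 sigma, the kind of anomaly the indicator is hunting for.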
Using the NOPE to predict ER
So the last section was a lot of words and theory, and a lot of what I'm mentioning here is empirically derived (aka I've tested it out, versus just blabbered). In general, the following holds true:
3 sigma NOPE_MAD tends to be "the threshold": For very low NOPE_MAD magnitudes (+- 1 sigma), it's effectively just noise, and directionality prediction is low, if not non-existent. It's not exactly like 3 sigma is a play and 2.9 sigma is not a play; NOPE_MAD accuracy increases as NOPE_MAD magnitude (either positive or negative) increases.
NOPE_MAD is only useful on highly optioned tickers: In general, I introduce another parameter for sifting through "candidate" ERs to play: option volume * 100/share volume. When this ends up over let's say 0.4, NOPE_MAD provides a fairly good window into predicting earnings behavior.
NOPE_MAD only predicts during the after-market/pre-market session: I also have no idea if this is true, but my hunch is that next day behavior is mostly random and driven by market movement versus earnings behavior. NOPE_MAD for now only predicts direction of price movements right between the release of the ER report (AH or PM) and the ending of that market session. This is why in general I recommend playing shares, not options for ER (since you can sell during the AH/PM).
NOPE_MAD only predicts direction of price movement: This isn't exactly true, but it's all I feel comfortable stating given the data I have. On observation of ~2700 data points of ER-ticker events since Mar 2019 (SPY 500), I only so far feel comfortable predicting whether the stock price goes up (>0 price difference) or down (<0 price difference). This is +1 for why I usually play with shares.
Some statistics:
#0) As a baseline/null hypothesis, after ER on the SPY500 since Mar 2019, 50-51% of price movements in the AH/PM are positive (>0) and ~46-47% are negative (<0).
#1) For NOPE_MAD >= +3 sigma, roughly 68% of price movements are positive after earnings.
#2) For NOPE_MAD <= -3 sigma, roughly 29% of price movements are positive after earnings.
#3) When using a logistic model of only data including NOPE_MAD >= +3 sigma or NOPE_MAD <= -3 sigma, and option/share vol >= 0.4 (around 25% of all ERs observed), I was able to achieve 78% predictive accuracy on direction.
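The kind of logistic model in #3 can be sketched like so. This is a toy version on synthetic data (the real inputs would be historical NOPE_MAD values and realized post-ER direction; every number below is fabricated for illustration, and the fit is a from-scratch gradient-descent logistic regression rather than the author's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: feature is NOPE_MAD in sigma, label is 1 if the
# AH/PM move was positive. Fabricated so high NOPE_MAD skews toward up-moves.
x = rng.uniform(-6, 6, size=500)
prob_up = 1 / (1 + np.exp(-0.5 * x))          # hidden "true" relationship
y = (rng.uniform(size=500) < prob_up).astype(float)

# Fit a one-feature logistic regression by gradient descent on log-loss
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))        # predicted P(up)
    grad_w = np.mean((p - y) * x)             # mean gradient wrt w
    grad_b = np.mean(p - y)                   # mean gradient wrt b
    w -= lr * grad_w
    b -= lr * grad_b

# Only "trade" the tails, mirroring the |NOPE_MAD| >= 3 sigma filter above
tail = np.abs(x) >= 3
pred_up = (1 / (1 + np.exp(-(w * x + b)))) >= 0.5
accuracy = np.mean(pred_up[tail] == (y[tail] == 1))
print(f"w={w:.2f}, tail accuracy={accuracy:.2f}")
```

The point of the tail filter is visible even in fake data: directional accuracy concentrates in the extreme-sigma events, which is consistent with #1 and #2 reading as mirror images around the ~50% baseline.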
Like all models, NOPE is wrong, but perhaps useful. It's also fairly new (I started working on it around early August 2020), and in fact, my initial hypothesis was exactly incorrect (I thought the opposite would happen, actually). Similarly, as commenters have pointed out, the timeline of data I'm using is fairly compressed (since Mar 2019), and trends and models do change. In fact, I've noticed significantly lower accuracy since the coronavirus recession (when I measured it in early September), but I attribute this mostly to a smaller date range, more market volatility, and honestly, dumber option traders (~65% accuracy versus nearly 80%). My advice so far, if you do play ER with the NOPE method, is to use it as follows:
Buy/short shares approximately right when the market closes before ER. Ideally, buy right before the earnings report drops in the AH session if you can.
Sell/buy to close said shares at the first sign of major weakness (e.g. if the NOPE predicted outcome is incorrect).
Sell/buy to close shares even if it is correct ideally before conference call, or by the end of the after-market/pre-market session.
Only play tickers with high NOPE as well as high option/share vol.
--- In my next post, which may be in a few days, I'll talk about potential use cases for SPY and intraday trends, but I wanted to make sure this wasn't like 7000 words by itself. Cheers. - Lily
I understand the subjunctive now; it's easy and I'll tell you about it. The problem is that we have been taught to think of it almost literally backwards.
For the last couple of weeks I've been studying the uses of the subjunctive intensively, wading through inane comments like "it's just something only natives know for sure; to everyone else it is invisible magic" (a very unfortunately common and reductive opinion I've seen around), scouring forums to study how people use it, reading guides and books and the whole thing. Not much seemed to be working; the whole thing still seemed arbitrary, impossible to predict, and totally random. But last night I had a breakthrough, and damn near everything fell into place.

I read a few things; I read a great write-up on how in spirit English uses the "idea" of the subjunctive mood in various auxiliary verbs from "I think..." to "could", etc., and how it often parallels things in Spanish. I then read through a whole chain of SpanishDict questions and answers on the topic, and someone made a comment talking to a new learner who used the indicative in the context of questioning whether or not she had popped a ball, when she should have used the subjunctive. His comment was "if you're uncertain that the ball was popped... then why is your next sentence TELLING ME that the ball was popped?"

I frowned. "Why would he say that?" I asked myself. "It wasn't as if she was literally telling him that the ball was popped from her perspective, instead she was just... she was... just..."

Finally it all hit me: the subjunctive mood makes NO SENSE... on its own. The fatal flaw, one that comes down to a critical misunderstanding between how English's and Spanish's primary moods operate, isn't that I didn't understand the subjunctive per se. It was that I had no real idea what the hell the indicative mood really was, nor how well-learned Spanish speakers interpret it to their ears. How could I? Nobody had explained it well to me. EVERYONE only talks about the subjunctive but never the indicative. Because it's obvious, right? It's the same in English as it is in Spanish, right?
Teachers barely even touch on it when discussing the subjunctive; after all, how could they be related? It's just like English, and the subjunctive is just a weird thing dangling from under it. No it's not. Guys, the Spanish indicative mood is far stronger than English's. It isn't about an implication of rules, it isn't about an implication of concrete reality. It is ABOUT reality, truths, and our perceptions of that truth, directly. There is no implication; it's the outright direct meaning. ANY time we use the indicative, we communicate that what we are saying is a FACT to our perception, and everyone else will hear us like that's what we're saying even if the sentence would imply otherwise; it doesn't act subordinate to the sentence context, so if you use it in the wrong context they crash into each other.

This is the key. The subjunctive isn't about being emotional, it isn't about possibilities, it isn't about hypotheticals per se. It is about NOT being the indicative, and literally the only real "rule" to the subjunctive is "do I want to give my sentences a strong meaning of truth, factual declaration, and concrete reality here? Or would that focus hurt my sentence?" The subjunctive is a mood of avoidance. It's used to AVOID the implications of the indicative. It can only properly be understood by contrast to Spanish's usage of the indicative mood, and once you properly grasp that, it's the easiest thing in the world to see.

I was up till 5 am checking all the examples I could find, reading Spanish forums online and all their usages. I couldn't find one that didn't make sense to me anymore, not a single damn one. All those weird irregularities? Sensible. The reasons why it seems to spread over so many theoretical topics? Sensible. I had done it, I'd cracked the meaning behind things. And regardless of what some people on forums had claimed, it's absolutely not magic or something you can only get by speaking for 10+ years.
What it is is simply misexplained, by virtue of being talked about in a vacuum. And with it came a LOT of sudden, cascading realizations about Spanish and how it truly differs from English. I will explain all of this below to help my fellow English-to-Spanish people.

First off: "Nice catch," you probably think, "but it's so abstract. How does realizing the implications of the indicative mood help me?" Allow me to demonstrate. It must be said that the indicative is way simpler than the subjunctive in terms of its scope. In fact, the entire reason the subjunctive seems to cover so many things in theory is that it is nothing more and nothing less than a reflection of everything the indicative is not, and so covers more topics on paper (even if it's used far less frequently in reality). Like I said, literally all of it makes sense when you just flip things around and denote the subjunctive as "that thing that gets called when the STRONG indicative mood would ruin things with its overwhelming presence." We can interrogate sentences in Spanish, piece by piece, and I'll show you exactly how it lines up.

Let's take the classical example: "¡Estoy feliz de que estés aquí!" Let's NOT ask ourselves why the subjunctive is here... instead, let's ask ourselves why the indicative ISN'T here. The indicative comes off as -- again, not merely implies like equivalent statements in English, but outright states a purpose of -- declaring facts, discussing the concrete reality of the here and now, our declarative plans, etc. Can you see why this would be "wrong"? It is in fact not wrong, at all -- it's just incongruous in context. If you said "estás aquí" with the mood that declares facts... well, at best you just announced to them that they are here, carrying the meaning of you wanting to let them know this fact. Rather bizarre under most circumstances, seeing as how they damn well know that they are already here and don't need us to declare this for them!
So, normally we won't do it unless in context we really do want to declare this to them for some reason.

"Es posible que él es aquí." Let's apply the same process! "Why wouldn't the indicative be good here?" Well, let's look at the first half. "Es posible que..." alright, so what we say next is possible! And next we say "es aquí". As in, we used the declarative, factual, concrete-reality mood. In other words, if we used "es" here, we would be outright saying that we firmly believe -- that it is a fact to us -- that this man is here. Well, if we firmly believed that already, then why the hell did we lead with "it's possible"? This is just an incongruous statement! Therefore, we don't use it like this (unless we really do want to make such an incongruous statement).

Talking about an object that doesn't exist or may not exist? Well, when you refer to it with the indicative, you DIRECTLY STATE that, to you, that object might as well be reality. Poetic, but also rather delusional. Therefore, we don't use the indicative. Making conditional plans, like "we will go to the mall once grandma arrives"? If you use the indicative on the latter half, you directly state that her arriving is your reality... even if she hasn't arrived yet. See what I said above about being delusional. And so on and so on.

Whenever you are confused about why a subjunctive is used, the proper question is not "why is it here?" The proper question is "why is the indicative not here?" It's a subtle difference, but an important one. What we have to understand is that Spanish is not a neutral-statement language. It is binary. You ARE asserting reality, or you are not; those are the only options, and to speak in the indicative is to presume to be asserting your interpretation of facts for others to hear. It is not a subtle effect or theory; this is how the Spanish-trained brain will unconsciously view your sentences, and why it will tell them that 'something' is off about what you said.
You indicated to them that you wish to discuss something factual that is in fact not, and their (and our future) brains aren't really sure how to interpret that. In fact, if you aren't already thinking of the indicative in this manner or interpreting sentences with that subtext, it's time to start; that's how the Spanish-speaking mind will interpret its usage, and if we want to learn this language well we need to interpret it that way as well. Those are the examples off the top of my head.

I will now explain why, in terms of the structure of English and Spanish, this idea is so hard to get across to native English speakers. This entire effect is a direct contrast to English, which is why it's alien to us until properly explained, and why to native Spanish speakers our confusion is foreign. To both of you -- English-to-Spanish students and people who speak Spanish first -- I will note the following lingual truth that most people don't realize by virtue of not thinking about it: English is a flexible, and often neutral, language. Let me repeat that: English is NEUTRAL by default. We DON'T communicate this kind of meaning with our basic sentences, ever. English is like a buffet rather than a binary. Its base forms are almost always implicationless by design, specifically so that we can choose to insert auxiliary words to enchant it with such meanings as English speakers please. This is likely also why most of its true subjunctive mood has faded into niche forms; English genuinely has no real need of it with so many ways of putting a sentence together.

Spanish, by and large, has two moods, and you will not escape from them nor their implications. (Well, and the imperative, but I'm not talking about it because both of our languages share that one nearly identically in concept.) A statement is a statement, the indicative is your reality and your attempt to declare facts for others to hear and discuss, and the subjunctive is the only way to indicate that what you speak of isn't that. That's it.
That's all there is to it. Also, I'll tell all Spanish-first readers who happen to read this the same thing I told my Spanish friends irl: you have no idea how confusing the subjunctive is when you are coming from a place where the "primary" prose can imply anything, due to a) that being what we think of thanks to English and b) most people not going out of their way to firmly correct this misconception. It would be damn near useless, and indeed extremely random to perceive in usage, if not for its reflections on the indicative, which is different from what we think at first. THIS is why your English-speaking friends who are trying to learn Spanish struggle so hard with the concept, while you just know it. (And on the flip side, why none of my Spanish-first friends realized the neutrality underlying English until I directly pointed it out to them. A lot of us aren't aware of the underlying mechanics; this is fine going from Spanish to English, since English is flexible as hell, but not so much the other way around, unfortunately for us.)

Now, after all of this, can I make a request to the general community: can we PLEASE not presume that the subjunctive is magic and that the indicative is so obvious? That kind of common notion is at least in part why a lot of English-to-Spanish students wrestle with the concept. For some reason we're often taught (I sure was) that the indicative in Spanish is synonymous with English and to not think more on it in comparison to its bizarre cousin, when in reality the differences between normal English prose and Spanish's indicative are both easy enough to explain and also EXACTLY why the subjunctive exists. Trying to explain how and where to use the subjunctive without that is like trying to put a car together with a wrench and a few bolts; good luck figuring it out easily with so much essential context missing. Maybe my teachers just didn't think about it?
Do people in general just not realize this crucial difference between English's loose, neutral structure and Spanish's much stricter, meaning-laden structure? Who knows. And no, realizing this doesn't mean we don't have to practice. I'll forget use cases, fail to realize I needed to switch moods until hindsight, etc. I recommend "Demystifying the Subjunctive" for a book; it helped me out immensely. But at least now we understand it. Learning, as Spanishdude on YouTube says, is just an act of giving context to things we already know, and now we can do that without being lost.

It IS a simple and easy-to-grasp concept at its heart; it's just not usually explained well, and requires explaining what precisely is the difference between how English approaches delivering information and how Spanish does. The former is neutral; the latter always communicates a meaning. The indicative in particular always imparts a sense of speaking of concrete reality no matter what sentence it happens to be in, and the subjunctive is nothing more than its replacement for when the indicative's strong statements on reality simply don't work with the matters being discussed. Of the two, the indicative is both more strict and also more narrow, and thus the rule of 'use the indicative until its determinate attitude of only addressing factual reality shits the sentence up' reigns best for the quickest and easiest way to conceptually grasp the subjunctive. It is all about the indicative; always has been.

Anyway, that's all I got. I'm finally going to bed; work will suck tomorrow, but oh well, I'm too happy to care. After that I'll... maybe finally learn some decent vocabulary. I'm a heavily grammar-based learner, so this was actually one of the first stops on my way through my new language, and I've still got a lot of learning to do. Still, now that I get this, I am much more confident of the rest of the way onward.

----------

A couple more fun tidbits, if anyone is still reading.
I also realized the English conditional is WAY wider than Spanish's, and that this is in part because in English it has come close to replacing separated subjunctive grammar in a lot of cases. If you ever notice how often we throw "can" and "could" around, it is in part because of this.

I also realized that, in a theoretical sense, the "true" purpose of the future tense in Spanish is to discuss plans for the future, not to indicate that it will happen. Technically a small detail and probably obvious to most, but for some reason I needed this realization to see why the subjunctive isn't triggered by its speculation; merely declaring plans is a concrete thing, after all. For some reason in English I get the subtle sense of trying to will over the future when I use it. Might be a slight language difference in intent, or maybe I'm just presumptuous about the future in English.

Finally, just a piece of trivia I liked: I realized "to think" and "creer / pensar" aren't really good translations for each other in implication. Ever wondered why "creer" doesn't trigger the subjunctive in a positive usage? While they mean literally the same thing, their connotations are nearly inverted: Spanish uses it to affirm that you believe something to be true (hence why it's also translated as "to believe"), while English uses "to think" to instead imply subjectivity and impart doubt to a clause. It would absolutely be a subjunctive trigger in Spanish if it were transplanted directly, since our usage of it in spirit is completely synonymous with Spanish's own usual triggers, but well, it isn't. This helps to reinforce one last bit: when it comes to certainty vs. uncertainty, it's often just a concept being used slightly differently in Spanish. My Spanish-speaking colleagues thought that one was interesting in particular, for some reason; maybe they didn't really know how I had meant it this whole time?
It's that time of year again, and we've got a new version of macOS on our hands! This year we've finally jumped off the 10.xx naming scheme and moved to 11! And with that, a lot has changed under the hood in macOS. As with previous years, we'll be going over what's changed in macOS and what you should be aware of as a macOS and Hackintosh enthusiast.
Has Nvidia Support finally arrived?
What has changed on the surface
A whole new iOS-like UI
macOS Snapshotting
What has changed under the hood
New Kernel cache system: KernelCollections!
New Kernel Requirements
Secure Boot Changes
No more symbols required
Broken Kexts in Big Sur
MSI Navi installer Bug Resolved
New AMD OS X Kernel Patches
Other notable Hackintosh issues
Several SMBIOS have been dropped
Extra long install process
X79 and X99 Boot issues
New RTC requirements
Legacy GPU Patches currently unavailable
What’s new in the Hackintosh scene?
Dortania: a new organization has appeared
Dortania's Build Repo
True legacy macOS Support!
Intel Wireless: More native than ever!
Clover's revival? A frankenstein of a bootloader
Death of x86 and the future of Hackintoshing
Getting ready for macOS 11, Big Sur
Has Nvidia Support finally arrived?
Sadly, every year I have to answer the obligatory question: no, there is no new Nvidia support. Currently Nvidia's Kepler line is the only natively supported generation. However, macOS 11 makes some interesting changes to the boot process, specifically moving GPU drivers into stage 2 of booting. Why this is relevant is due to Apple's initial reason for killing off Web Drivers: Secure Boot. What I mean is that Secure Boot cannot work with Nvidia's Web Drivers due to how early Nvidia's drivers have to initialize, and thus Apple refused to sign the binaries. With Big Sur, there could be 3rd party GPU support; however, the chances are still super slim, though slightly higher than with 10.14 and 10.15.
What has changed on the surface
A whole new iOS-like UI
Love it or hate it, we've got a new UI more reminiscent of iOS 14, with hints of skeuomorphism (a somewhat subtle callback to previous Mac UIs, which have neat details in the icons). You can check out Apple's site to get a better idea:
macOS Snapshotting
A feature initially baked into APFS back in 2017 with the release of macOS 10.13, High Sierra, snapshotting now applies to macOS's main System volume, which has become both read-only and snapshotted. What this means is:
3rd parties have a much more difficult time modifying the system volume, allowing for greater security
OS updates can now be installed while you're using the OS, similar to how iOS handles updates
Time Machine can now more easily perform backups, without the file inconsistencies HFS Plus had while the machine was in use
However there are a few things to note with this new enforcement of snapshotting:
OS snapshots are not calculated as used space, instead being labeled as purgeable space
Disabling macOS snapshots for the root volume will break software updates, and can corrupt data if an update is applied
What has changed under the hood
Quite a few things actually! Both in good and bad ways unfortunately.
New Kernel Cache system: KernelCollections!
So for the past 15 years, macOS has been using the prelinked kernel as a form of kernel and kext caching. And with macOS Big Sur's new read-only, snapshot-based system volume, a new version of caching has been developed: KernelCollections! How this differs from previous OSes:
Kexts can no longer be hot-loaded, instead requiring a reboot to load with kmutil
OS Snapshots are now verified on each boot to ensure no system volume modifications occurred
apfs.kext and AppleImage4.kext verify the integrity of these snapshots
While technically these security features are optional and can be disabled after installation, many features including OS updates will no longer work reliably once disabled. This is due to the heavy reliance on snapshots for OS updates, as mentioned above, and so we highly encourage all users to ensure at minimum SecureBootModel is set to Default or higher.
Note: ApECID is not required for functionality, and can be skipped if so desired.
Note 2: OpenCore 0.6.3 or newer is required for Secure Boot in Big Sur.
No more symbols required
This point is the most important part, as this is what we use for kext injection in OpenCore. Currently Apple has left symbols in place, seemingly for debugging purposes; however, this is a bit worrying, as Apple could outright remove symbols in later versions of macOS. For Big Sur's cycle we'll be good on that end, but we'll be keeping an eye on future releases of macOS.
New Kernel Requirements
With this update, the AvoidRuntimeDefrag Booter quirk in OpenCore broke. Because of this, the macOS kernel will fall flat when trying to boot. The reason for this is that cpu_count_enabled_logical_processors requires the MADT (APIC) table, and so OpenCore will now ensure this table is made accessible to the kernel. Users will however need a build of OpenCore 0.6.0 with commit bb12f5 or newer to resolve this issue. Additionally, both Kernel Allocation requirements and Secure Boot have also broken with Big Sur due to the new caching system discussed above. Thankfully these have also been resolved in OpenCore 0.6.3. To check your OpenCore version, run the following in terminal:

nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version

If you're not up-to-date and running OpenCore 0.6.3+, see here on how to upgrade OpenCore: Updating OpenCore, Kexts and macOS
Broken Kexts in Big Sur
Unfortunately, with the aforementioned KernelCollections, some kexts have broken or been hindered in some way. The main kexts that currently have issues are anything relying on Lilu's userspace patching functionality:
Several SMBIOS have been dropped
Big Sur dropped a few Ivy Bridge and Haswell based SMBIOS from macOS, so check below to ensure yours wasn't dropped:
iMac14,3 and older
Note iMac14,4 is still supported
MacPro5,1 and older
MacMini6,x and older
MacBook7,1 and older
MacBookAir5,x and older
MacBookPro10,x and older
If your SMBIOS was supported in Catalina and isn't included above, you're good to go! We also have a more in-depth page here: Choosing the right SMBIOS For those wanting a simple translation for their Ivy and Haswell Machines:
iMac13,1 should transition over to using iMac14,4
iMac13,2 should transition over to using iMac15,1
iMac14,2 and iMac14,3 should transition over to using iMac15,1
Note: AMD CPU users should transition over to MacPro7,1
iMac14,1 should transition over to iMac14,4
Currently only certain hardware has been officially dropped:
"Official" Consumer Ivy Bridge Support(U, H and S series)
These CPUs will still boot without much issue, but note that no Macs are supported with consumer Ivy Bridge in Big Sur.
Ivy Bridge-E CPUs are still supported thanks to being in MacPro6,1
Ivy Bridge iGPUs slated for removal
HD 4000 and HD 2500; however, these drivers are currently still present in 11.0.1
Similar to Mojave and Nvidia's Tesla drivers, we expect Apple to forget about them and only remove them in the next major OS update next year
Extra long install process
Due to the new snapshot-based OS, installation now takes some extra time for sealing. If you get stuck at Forcing CS_RUNTIME for entitlement, do not shut down. Doing so will corrupt your install and break the sealing process, so please be patient.
X79 and X99 Boot issues
With Big Sur, IOPCIFamily went through a decent rewrite, causing many X79 and X99 boards to fail to boot as well as kernel panic on IOPCIFamily. To resolve this issue, you'll need to disable the unused uncore bridge:
New RTC requirements
With macOS Big Sur, AppleRTC has become much more picky about making sure your OEM correctly mapped the RTC regions in your ACPI tables. This is mainly relevant on Intel's HEDT series boards; I documented how to patch said RTC regions in OpenCorePkg:
For those having boot issues on X99 and X299, this section is super important; you'll likely get stuck at PCI Configuration Begin. You can also find prebuilts here for those who do not wish to compile the file themselves:
For some reason, Apple removed the AppleIntelPchSeriesAHCI class from AppleAHCIPort.kext. Due to the outright removal of the class, trying to spoof to another ID (generally done by SATA-unsupported.kext) can fail for many and create instability for others.
A partial fix is to block Big Sur's AppleAHCIPort.kext and inject Catalina's version with any conflicting symbols being patched. You can find a sample kext here: Catalina's patched AppleAHCIPort.kext
This will work in both Catalina and Big Sur, so you can remove SATA-unsupported if you want. However, we recommend setting the MinKernel value to 20.0.0 to avoid any potential issues.
Legacy GPU Patches currently unavailable
Due to major changes in many frameworks around GPUs, those using ASentientBot's legacy GPU patches are currently out of luck. We recommend users with these older GPUs either stay on Catalina until further developments arise, or buy an officially supported GPU.
What’s new in the Hackintosh scene?
Dortania: a new organization has appeared
As many of you have probably noticed, a new organization focusing on documenting the hackintoshing process has appeared. Originally under my alias, Khronokernel, I started to transition my guides over to this new family as a way to concentrate the vast amount of information around Hackintoshes, to both ease users and give a single trusted source for information. We work quite closely with the community and developers to ensure information is correct, up-to-date, and of the best standards. While not perfect in every way, we hope to be the go-to resource for reliable Hackintosh information. And for the times our information is either outdated, missing context, or generally needs improving, we have our bug tracker to allow the community to more easily bring attention to issues and speak directly with the authors:
Dortania's Build Repo
Kexts here are built right after each commit, and the repo currently supports most of Acidanthera's kexts and some from 3rd party devs as well. If you'd like to add support for more kexts, feel free to PR: Build Repo source
True legacy macOS Support!
As of OpenCore's latest version, 0.6.2, you can now boot every version of x86-based builds of OS X/macOS! A huge achievement on @Goldfish64's part: we now support every major version of kernel cache, both 32- and 64-bit. This means machines like Yonah and newer should work great with OpenCore, and you can even relive the old days of OS X, like OS X 10.4! The Dortania guides have been updated accordingly to accommodate builds of those eras; we hope you get as much enjoyment going back as we did working on this project!
Intel Wireless: More native than ever!
Another amazing step forward for the Hackintosh community: near-native Intel Wi-Fi support! Thanks to the endless work of the many contributors to the OpenIntelWireless project, we can now use Apple's built-in IO80211 framework to get near-identical support to Broadcom wireless cards, including features like network access in recovery and Control Center support. For more info on the developments, please see the itlwm project on GitHub: itlwm
Note: native support requires the AirportItlwm.kext and SecureBootModel enabled in OpenCore. Alternatively, you can force IO80211Family.kext to ensure AirportItlwm works correctly.
AirDrop support is currently not implemented; however, it is actively being worked on.
Clover's revival? A frankenstein of a bootloader
As many in the community have seen, a new bootloader popped up back in April of 2019 called OpenCore. This bootloader was made by the same people behind projects such as Lilu, WhateverGreen, AppleALC, and many other extremely important utilities for both the Mac and Hackintosh community. OpenCore's design had been properly thought out, with security auditing and proper road-mapping laid down; it was clear that this was to be the next stage of hackintoshing for the years we have left with x86.

And now let's bring this back to the old crowd favorite, Clover. Clover has been having a rough time recently, both with the community and stability-wise. With many devs jumping ship to OpenCore and Clover's stability breaking more and more with C++ rewrites, it was clear Clover was on its last legs. Interestingly enough, the community didn't want Clover to die, similarly to how Chameleon lived on through Enoch. And thus, we now have the Clover OpenCore integration project (now merged into master with r5123+). The goal is to combine OpenCore into Clover, allowing the project to live a bit longer, as Clover's current state can no longer boot macOS Big Sur or older versions of OS X such as 10.6.

As of writing, this project seems to be a bit confusing, as there seems to be little reason to actually support Clover. Many of Clover's properties have feature parity in OpenCore, and trying to combine both C++ and C ruins many of the features and benefits either language provides. The main feature OpenCore does not support is macOS-only ACPI injection; however, the reasoning is covered here: Does OpenCore always inject SMBIOS and ACPI data into other OSes?
Death of x86 and the future of Hackintoshing
With macOS Big Sur, a big turning point is about to happen for Apple and their Macs. As we know, Apple will be shifting to in-house-designed Apple Silicon Macs (really just ARM), and thus x86 machines will slowly be phased out of their lineup within 2 years. What does this mean for both x86-based Macs and hackintoshing in general? Well, we can expect about 5 years of proper OS support for the iMac20,x series, which released earlier this year, with an extra 2 years of security updates. After this, Apple will most likely stop shipping x86 builds of macOS, and hackintoshing as we know it will have passed away. For those still in denial, hoping something like ARM Hackintoshes will arrive, please consider the following:
We have yet to see a true iPhone "Hackintosh", so an ARM Hackintosh is unlikely as well
There have been successful attempts to get the iOS kernel running in virtual machines, however much work is still to be done
Apple's use of the name "Apple Silicon" hints that plain ARM is not actually what future Macs will be running; instead we'll see highly customized chips based on ARM
For example, Apple will be relying heavily on hardware features such as W^X, kernel memory protection, pointer authentication, etc. for security, and thus both macOS and applications will be dependent on them. This means hackintoshing on bare metal (without a VM) will become extremely difficult without copious amounts of work
Also keep in mind that Apple Silicon Macs will no longer be UEFI-based like current Intel Macs, meaning a huge amount of work would be required on this end as well
So while we may be heartbroken that the journey is coming to a stop in the somewhat near future, hackintoshing will still be a piece of Apple's history. So enjoy it now while we still can, and we here at Dortania will continue supporting the community with our guides till the very end!
Getting ready for macOS 11, Big Sur
This will be your short run down if you skipped the above:
Lilu's userspace patcher is broken
Due to this many kexts will break:
WhateverGreen's DRM and -cdfon patches
Many Ivy Bridge and Haswell SMBIOS models were dropped
See above for what SMBIOS to choose
Ivy Bridge iGPUs are to be dropped
Currently in 11.0.1, these drivers are still present
For the last 2, see here on how to update: Updating OpenCore, Kexts and macOS

In regards to downloading Big Sur, currently gibMacOS in macOS or Apple's own software updater are the most reliable methods for grabbing the installer. Windows and Linux support is still unknown, so please stand by as we continue to look into the situation; macrecovery.py may be more reliable if you require the recovery package.

And as with every year, the first few weeks to months of a new OS release are painful in the community. We highly advise first-time installers to stay away from Big Sur. The reason is that we cannot determine whether issues are Apple-related or specific to your machine, so it's best to install and debug a machine on a known working OS before testing out the new and shiny. For more in-depth troubleshooting with Big Sur, see here: OpenCore and macOS 11: Big Sur
WotV PvP Tactics & Mentality - Six Months of Mediena Bombs & how the Meta Evolves in PvP
*** Registration is now open until 9/15 for the latest Live PvP tournament, organized by u/LongTimeGaming. There is no entry fee and everyone is guaranteed at least five rounds of combat! Please PM him or me for details on how to register! ***

For today's entry I want to discuss one of the stronger live PvP strategies currently floating around the meta, particularly at the higher player ranks or with whales smurfing at lower ranks: the Medi (Mediena) Bomb.
The Medi bomb, as it currently exists, comprises three major components: a Shukuchi Mediena, Agility, and as much Magic/Magic Attack equipment and VCs as can be mustered together. If you take away any of these components the win probability of the strategy plummets. Let's examine each in turn:
Shukuchi (and forward deployment) - Without Shukuchi, and without pushing Medi as far forward as you can, you leave a lot of squares open where the enemy team can be hiding. No highly competitive team in live PvP is sticking to the default 'three across' formation!
Agility - Medi's main weakness is her fragility. If Medi doesn't go first, there's the chance that the opponent will have a high-speed unit in place to OHKO Medi (e.g., Frederika), or will at least be able to reposition their faster units out of Medi's threat range. Shadow Runner on a Medi is a must, of course, but VCs that yield AGI are crucial as well.
Magic Attack - Being able to OHKO at least one enemy unit is essential - a unit left at 1 HP is as dangerous as a unit at 4000 HP. A Platinum Rod +5 and a high-level Trousseau are the most important elements. Mag from Ramuh is useful. You won't have Medi's own Mag passive available due to needing Speed + Movement.
Responses to the Medi Bomb - Sample Formations
So here are some sample formations I whipped up - note the screenshots are composites from my main account, so levels etc. are not going to be optimal. There are many variations possible; these are just starting points!

Option 1 - Go Faster than Medi

Gunner Girls that are faster than a Medi Bomb are close to a guaranteed win! The hardest counter to a Medi bomb is a Fred that's faster than her - a properly kitted Fred can OHKO a Medi with Sharpshoot off the bat without any external buffs. Because a Medi with an AGI VC is always faster than a Fred without one, you will need to put AGI cards of your own on your Fred to ensure you go before the enemy Medi does.

Option 2 - Be Able to Survive the Plume

This is the second picture here - the first one had an impossible setup for Rain due to my photoshopping (I don't own Rain). There are of course characters that can survive a plume, even a plume from a super whaled-out Medi. Rain is one of these - he's a magic tank off the bat and he has elemental advantage over Medi. Ayaka can stand your Viktora back up post-plume while Rain can OHKO the Medi.

Option 3 - Be Able to Evade the Plume

Miranda is a super useful utility placeholder for these formations. Vinera is pretty popular in live PvP these days due to her combination of speed, power, and high evade. She is essentially unhittable to teams that haven't geared with guaranteed-hit options or stacked as much ACC as they can. I don't have a leveled-up Vinera to test the above combination for exact hit percentages, but an unbuffed Vinera should have about 150-200 evade depending on cards. Not exactly easy to hit with a plume!
A Brief History of the Medi Bomb
Finally, the changes in the Medi Bomb's popularity over time exemplify the essential element that defines the 'meta' in live PvP: the outcome of nearly every match is a binary win or loss. Any team that is slightly better than another will win almost all the time, holding all other elements equal. Medi's evolution in live PvP shows this perfectly.

In the first weeks of WotV, Medi bombs were one of the most common formations available. Medi was, I think, the fastest launch character, and once she hit level 40 she could OHKO most other units common at the time that were also at level 40 (Mont, Sterne, etc.). This meant that Medi could just run up and drop the other team. Lots and lots of Medi v Medi engagements were happening, and Medi's dominance in PvE and PvP was often remarked on in jokes or commentary here and elsewhere.

Flash forward to around the FFT event and Medi bombs disappeared from competitive play. Two things happened: Frederika, and the increasing toughness of teams. Fred, as I said before, is a hard counter to Medi as long as she goes first, and at this stage in the game almost no one had high-AGI VCs, so Freds melted Medis off the live PvP scene. A secondary effect was that everyone was getting to level 99 on their mains and starting to amass TMRs - Medi stopped being able to OHKO units.

Medi Bombs stayed quiet for the months following FFT until Platinum Rod came out; then whales with the +15% AGI off the House Beoulve card could go first in most engagements and do enough damage to OHKO broad swaths of units. Now with FFT2 almost done, plenty of dolphins or even minnows/f2p willing to proc 5-6 whimseys per day can have their 15% AGI cards, so Medi bombs are more accessible.

Essentially, the meta is an unstable equilibrium defined by a complex set of inputs and that binary output of winning/losing. As soon as Medi is one AGI slower than her hard counter *and that hard counter is common*, she becomes worthless. Same with the tipping point between OHKO and survival.
Right now Medi bombs are fast and powerful - and therefore viable against many team comps.

Well, that wraps up today's post - this one turned out to be longer than I was expecting. Let me know in the comments if it's too long, and if you have any suggestions for future topics that you'd like a detailed breakdown on, please let me know!

------------------------------------------------

WotV PvP Tactics & Mentality is an irregularly updated series of posts about the most neglected aspect of WotV: Live PvP. If you liked this post, feel free to read my previous entries in the series:
Bluehole - Let's talk Wellbia/XIGNCODE3 user privacy risks for the sake of transparency
For those who don't know: XIGNCODE3 is a kernel-level (ring 0) process, xhunter1.sys, owned by the Korean company Wellbia (www.wellbia.com). Contrary to what people say, Wellbia isn't owned by or affiliated with Tencent; rather, XIGNCODE3 is custom-tailored under contract for each individual game - mainly games operating in the APAC region, many of them owned by Tencent. XIGNCODE3 is outsourced to companies as a product modified to the specific characteristics of each game.

The process runs at the highest privilege level of the OS from boot and is infamous for being essentially a rootkit - on a malware level, it is highly vulnerable to abuse should Wellbia or any of the third-party companies be the target of an attack. It has been heavily dissected by the hacking community as being highly intrusive, and it has been reverse engineered (and is nowadays still easily bypassable by a skilled and engaged modder by creating a custom Windows framework). While most of this is true for any standard anti-cheat, users should be aware that XIGNCODE3 is able to scan the entire user memory cache and calls to DLLs, including physical-state APIs such as GetAsyncKeyState, where it scans the physical state of hardware peripherals - essentially becoming a hardware keylogger.

The long history of reverse engineering of this software has shown that Wellbia heavily collects user data for internal processing in order to create whitelists of processes and strings, analyzed by evaluating PE binaries. Having full access to your OS, it is also known to scan and access user file directories, collecting and storing paths of files modified within the last 48 hours for the sake of detecting possible sources of bypassing. All this data is ultimately collected by Wellbia on their host servers - also via API calls to Korean servers - in order to run services such as whitelists, improve algorithm accuracy, and run comparative statistics and analysis based on binaries, strings and common flags.
Usually this is a high risk for any such service, including BattlEye, EasyAntiCheat, etc., but what's worrying with Wellbia, and thus Bluehole, comes down to a couple of points (not to mention you can literally just deny the service from installing, which by itself is already a hilarious facepalm of a situation, and nowhere does the TSL call for an API of the service):
1) Starting off, Wellbia is a rather small development company with only one product on the market, sold to rather small companies, the majority held by the Chinese government or based in countries where data handling, human rights and user privacy are heavily disregarded. This makes my tinfoil hat think that the studio's network security isn't as fortified as that of a Sony (which has abused rootkits before), just due to budget alone. Their website is absolutely atrocious and amateurish - for an international company that deals with international stakeholders and clients, the amount of poor English, errors and ambiguous information in their presentation website is impressive. There are instances where the product name is not even written correctly in their own EULA. If a company cannot invest even in basic PR and presentation, it leaves me with a bitter taste that their network security isn't any better - they can handle user binaries, but network security is a completely different line of work. The fact that hackers are easily able to heartbeat their API network servers confirms this for me.
2) This is the most fun one. Wellbia's website and terms and conditions explicitly say that they're not to be held accountable should anything happen - terms that you agree to and are legally bound by, by default, by agreeing to Bluehole's terms and conditions: "Limitations of Company Responsibility
IGNCODE3 is a software provided for free to users. Users judge and determine to use services served by software developers and providers, and therefore the company does not have responsibility for results and damages which may have occurred from XIGNCODE3 installation and use.
2. Company has no responsibility attributable to user’s computer or network environment-based reasons.
3. Company has no responsibility for XIGNCODE3 and XIGNCODE3 based service errors, XIGNCODE3 and XIGNCODE3 based service prohibition from other services are attributable to user-based reasons."
(The fact that in 1. they can't even be bothered to write the name of their own product properly shows how little they care about things in general - you can have a look at this whole joke of a ToS, which I could probably put more effort into writing, here: https://www.wellbia.com/?module=Html&action=SiteComp&sSubNo=5 - so I am sorry if I don't trust where my data goes.)

3) It kind of pisses me off that Bluehole adopted this mid-release, after the product had been purchased. When I initially bought the product, nowhere was it written that the user's operating system data was being collected by a third-party company on servers located in APAC (and I'm one of those people who heavily reads terms and conditions) - and the current ToS still only touches this topic slightly and ambiguously. It does not say which data gets collected, nor disclose who holds it or where - "third party" could be literally anyone. A major disrespect to your consumers. I'm kind of pissed off, as when I initially purchased the product in the very, very early stages of the game I didn't agree to any kernel-level data collection held abroad without disclosure of what data is actually being collected - otherwise it would have been a big No on the purchase. The fact that you change the rules of the game and the terms and conditions in the middle of the product's release leaves me with two options - Agree to Your Terms, or Don't Use a product I've already purchased, which now has no use. Both the in-game changes and these third-party implementations are so different from my initial purchase that it feels like the equivalent of buying a shower which the next year is so heavily modified that it decides to be a toilet.
I would really like for you, Bluehole, to show me the initial terms and conditions from when the game was first released, and to offer me a refund, since you decided to change the product and terms and conditions midway in ways I don't agree with - leaving me empty-handed with no choice but to abandon the product, thus making this purchase a service which I used for X months and not a good. I really wish this topic had more visibility, as I know the majority of users are in the dark about this whole thing, and I wish Valve and new game companies would make an effort in asserting their products' disclosures about data transparency - and the limit of how much a product can change and still be considered a valid resemblance of what was purchased - when curating their games in the future. I literally bought a third-person survival shooter and ended up with a rootkit Chinese FPS.

Sincerely, a pissed-off customer - who, unlike the majority, is concerned about my data privacy, and I hope you're one day held accountable for changing sensitive contract topics such as user privacy mid-release.

-----

EDIT: For completely removing it from your system, should you wish:

Locate the file xhunter1.sys. It is located in this directory: C:\Windows\xhunter1.sys

Remove the registry entry (regedit). The entry is located here: HKEY_LOCAL_MACHINE > SYSTEM > ControlSet001 > Services > xhunter

For more information about XIGNCODE3 and previous successful abuses which show the malignant potential of the rootkit (kudos to Psychotropos):

- https://x86.re/blog/xigncode3-xhunter1.sys-lpe/

- https://github.com/Psychotropos/xhunter1_privesc
Disclaimer: See the note explaining the volatility figures in the comments.

There's been a bit of talk about market volatility leading up to the election, so I thought I would provide some historical information about the 2016 election. I'm going to try to keep this apolitical and not make any predictions about the actual election results.

Let's set up 2016. The election date was November 8, 2016, and Clinton was the heavy favourite to beat Trump. Forecast-wise, things were actually extremely similar to 2020 at this point, but in the last couple of weeks leading into the election Trump did make up some significant ground. There was a lot of fear around the market leading up to the election, with many suggesting that should Trump win, we would see a correction of 10+% and a run on precious metals.

2016 Polling Timeline: https://i.imgur.com/eiyh0uV.png

The correction arrived, but it was short-lived. S&P futures hit circuit breakers after-hours, but recovered after Clinton's concession speech, only to open moderately up and end the day up significantly (~4%?). My SPY puts and NUGT calls did not do well.

Let's look at SPY.

Post-Election $SPY ATM Call Volatility: https://i.imgur.com/C2T6RM8.png
Post-Election $SPY ATM Put Volatility: https://i.imgur.com/iwXhXGV.png

Looking at ATM calls and puts, we see that both experienced a run-up in volatility in the weeks leading into the election. This is to be expected and occurs with any binary or crush-inducing event: implied volatility is a time-weighted metric, and as the calm time before the event decays away, the event itself makes up a larger share of the remaining time value. It would be an interesting and very useful exercise to look at the forward volatility for the November 4 to November 11/18 period and observe how much the expected volatility changes, but the data I have available is too imprecise to provide an accurate picture of the volatility surface.
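For reference, the forward-volatility exercise mentioned above is mechanical once you have two clean ATM IVs bracketing the event. A sketch with made-up numbers - the 12%/22% IVs and the 4/11-day expiries are purely illustrative, not actual 2016 data:

```python
import math

def forward_vol(sigma1, t1, sigma2, t2):
    """Forward vol between expiries t1 < t2 (in years), from total-variance
    additivity: sigma_fwd^2 * (t2 - t1) = sigma2^2 * t2 - sigma1^2 * t1."""
    return math.sqrt((sigma2**2 * t2 - sigma1**2 * t1) / (t2 - t1))

# Hypothetical IVs bracketing an election: 12% on the pre-event expiry
# (4 days out) and 22% on the post-event expiry (11 days out).
t1, t2 = 4 / 365, 11 / 365
print(f"implied forward vol over the event week: {forward_vol(0.12, t1, 0.22, t2):.1%}")
```

With these toy inputs the week containing the event carries an implied vol far above either quoted IV, which is exactly the "event premium" the post is describing; the hard part in practice is getting IV quotes clean enough for the subtraction to be meaningful.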
If we take a quick look at puts expiring the Friday before the election, we don't see a significant change in the implied volatility.

Pre-Election $SPY ATM Put Volatility: https://i.imgur.com/Knje4gK.png

I'm not going to speak much to it, but we saw similar volatility changes in ATM $GLD options in the lead-up to the election.

Post-Election $GLD ATM Call Volatility: https://i.imgur.com/iGcsrUQ.png
Post-Election $GLD ATM Put Volatility: https://i.imgur.com/Sp8kX4A.png

The two sectors that I regret not trading the most last time around were health care and private prisons. The private prisons sector doesn't have the liquidity to give a decent picture of the volatility, but with healthcare we can.

Post-Election $XBI ATM Call Price History: https://i.imgur.com/U1UMARA.png
Post-Election $XBI ATM Call Volatility: https://i.imgur.com/xUYItnv.png

So how does 2020 compare? Coronavirus has thrown the market into an abnormal volatility landscape this year. While the current win/loss probability for Biden/Trump is very similar to what we saw in 2016, the way we got there is very different. Biden has been the significant betting favourite throughout the entire campaign, and Trump's old playbook doesn't seem to be working as well with swing-state voters this time around.

2020 Polling Timeline: https://i.imgur.com/MoKOQcJ.png
2020 Full-Year $SPY 350C Volatility: https://i.imgur.com/mRl69UM.png

Even with the currently elevated volatility, we should still expect to see it tick up on options leading into the election, as with all other crush-type events.
Dice & Card Randomizers for combat (PvP/Dungeon Crawls/etc.) - Your favorite?
I'm trying to figure out what kind of mechanic is the most engaging to players when it comes to simulating combat and/or adventure, independent of how it ties in to the rest of the game. There are many different options, and I think they all have some advantages and disadvantages, but I thought I'd ask you fellow designers what you have enjoyed. These examples are just the ones top of mind; let me know if you have any other prominent examples.

Some examples:

Beat Target Number (multiple examples) - Once your attacks and targets have been chosen, roll (usually) one die to see if you match or beat the target number. If you do, you score a hit. This method is probably the simplest, as it can be reduced down to one die. It's also very quick, and provides the randomness we're looking for. It becomes easy to determine if you hit. It also becomes easy to modify the attack power and/or target number in a number of ways, such as adding extra dice or modifying the input number or the target number by +/- a few steps. This method is used in D&D with a d20, but can be used in a much simpler way. A clear disadvantage is that it is extremely random - the probability distribution is quite static, and the result is binary: fail or succeed. You could use the success margin to create a non-binary output ("by how much you beat the score"), but it becomes very swingy.

Beat Target Number w. Pool (Mansions of Madness) - Before (or after) the target number is known, roll a pool of dice and count the number of successes. You may optionally expend resources to reroll or to modify your results to improve your success score. Compare this score to a target number. In comparison to the previous method, it gives the player more control of the number. It also becomes easy to stepwise increase your odds of success (by adding more dice or expending resources). The result is not as binary as in the previous method.

Opposed Success (Descent, Warhammer Quest, etc.) - Roll a number of dice to determine success. Opponent rolls a number of dice to determine the target number. Either a success is gained if your numbers beat/exceed the target number, or a ranked success happens (count the excess) - good for determining damage. Requires more dice rolls. This type of randomizer is commonly used to compare "attack damage" vs "defense". It's good when the defender is also a player, to give some sensation of "active defense", even if it's just done through a die roll. This type of mechanic seems common in dice-based dungeon crawlers.

Sequential Dice Pool Success (Warhammer) - Roll a high number of dice against a target number (opposed or not). Count successes for this category of success, and resolve results. Optionally, roll the successes again to determine damage (hit -> damage) or perform another operation. The difference between this type and "Beat Target Number w. Pool" is that the pools tend to be larger, the target tends to be variable, the success score tends to be non-binary, and the pools tend to be sequential. It's a very time-consuming exercise, but provides a great level of detail in multiple dimensions. If these dimensions can be used to generate "playability" or "depth", it can be a successful implementation. This type of mechanic is probably most common in wargame scenarios with large armies battling.

Multi-Category Dice Pool Success (World of Warcraft Board Game) - Simultaneously roll one or more dice pools (colored dice, for example) to determine success (on a scale, eventually beating a target number) on different parts of the combat operation, add any modifiers, then resolve successes for the different dice pools. In the WoW example, the dice pools are "ranged/spell attack", "melee attack" and "defense" (blue, red, green), and they resolve differently based on their success rates. Melee attacks happen after retaliation, and ranged/spell attacks happen before retaliation.

Yahtzee Output (King of Tokyo) - Roll five dice. Reroll any number of dice. Reroll any number of dice. Check results for effect(s). This has the advantage of providing some level of player control. King of Tokyo uses it well because you can not only score damage this way, but also gain score and currency. There are always "things to do" and aim for. It easily becomes the center of the game.

Yahtzee Input (Dice Throne) - Roll five dice. Reroll any number. Reroll any number. Check results for resources available. Use these resources to perform various abilities (or combos). Again, this provides player control. It also makes it easy to diversify the output possibilities and create even more unique characters than in a Yahtzee-output kind of game. You can trigger special abilities with different combinations of input. I guess it becomes a little more frustrating with the chance to miss important abilities, and it can create situations where each turn is relatively similar or relatively "swingy".

Deck of Modifiers (Gloomhaven) - Once your attacks (or abilities) have been chosen, flip over the top card of a deck to randomly modify the end result. You need a main input for your combat in this example - in Gloomhaven it comes from your hand of cards. It makes it easy to evolve the modifier deck over time (if it is individual) and gives some level of control over the randomness, as the distribution evens out before the deck gets reshuffled. It also has a bit of a "fun" factor to it (flipping the card is fast and exciting). A disadvantage to me is that it seems to limit the output to one variable (e.g., damage), especially once you add specialized cards.

Command Decks (Game of Thrones) - Each player has a deck of cards with varying numbers. For each combat encounter, a card (per player) is selected and, combined with other input factors, determines the non-random but unknown total output to compare. Command Decks are great for PvP games, but lose some meaning in PvE. One advantage is that the command cards can contain additional bonuses, not only target numbers. For example, "If you win this combat, defeat an additional enemy unit", or "Retreat safely from this combat encounter".

QUESTION: What's your favorite mechanic out of these (or others) for "fun" and strategic impact? Do you have any fun stories about what happened?

EDIT: Some categories have been added.
Why is it such an abysmal pain to use libraries in C++ compared to pretty much anything else?
I recently realized something that's been annoying me for so long.
How to add a library in JavaScript:
Type npm install 'library' in a shell on your project's directory.
How to add a library in C#:
Type dotnet add package 'library' in a shell on your project's directory.
How to add a library in Go:
Type go get 'library_link' in a shell.
How to add a library in Rust (And this is so "C++ is compiled" isn't an excuse):
Lookup the last version of the library.
Add 'library' = 'library_version' to your project file (Cargo.toml).
Restart your editor so the language server can get the symbols from the new library.
If you install cargo-edit you can alternatively just:
Type cargo add 'library' in a shell on your project's directory. cargo-edit will do everything for you.
How to add a library in C++:
Prepare two folders for header include files, and library binaries.
Append flags to your compiler to recognize them accordingly.
Investigate which way the library works, praying the documentation for that is actually good.

### If the library is header-only:
Add the headers required to your include path.
You should probably modularize the code or spend 30 minutes setting up precompiled headers to avoid adding a lot to your compile times.
And also lower your warning level, because even if you put #pragmas around the headers, editors probably won't recognize them.

### Else, if the library distributes its binaries:
Download the .lib or .a files from the latest release and put them in your library folder.
Tell the compiler to link your libraries.
Put the downloaded header files in your include folder.
If the library needs a .dll, download it and paste it in the folder of your compiled executable. Distribute it with your shipped application.
If you want to keep your application as a single executable, distribute fewer dependencies, or have no need for an installer - all perfectly valid reasons - library owners usually offer a static version.
If they don't have a static version, spend an afternoon fighting the linker figuring out how to build the library yourself.
If you want the library to link against the static runtime, the step above is required as well.
Make sure to select the correct runtime library, or face really weird linker errors.

### If the library doesn't distribute its binaries:
Clone the repository of the library and figure out how to build it yourself. There's usually a tutorial so it's not that complicated.
Make sure to select the correct runtime library, or face really weird linker errors.
Do this for every platform you want to distribute on.

### If the library uses CMake ᵒʰ ᵍᵒᵈ ʷʰʸ:
You can choose between two options:

#### Use CMake too
Abandon your project's build system and spend days learning an entirely new language that everyone complains about
Probably suffer from a loss in build time.

#### Generate files for your compiler
Install CMake GUI
Learn how to use it and configure what you desire.
Alternatively, learn yet another command line tool, or a tiny bit of CMake syntax to change what you want.
Generate the files according to your platform and IDE.
Build the library with a bloated IDE, or alternatively research how to build it with the much less known and less documented command-line tools.
Make sure to select the correct runtime library, or face really weird linker errors.

### If you run into linker errors when running your program (and you will):
If it's an unresolved external symbol, most likely your library needs another dependency linked. Look up the function's name and what library it belongs to, and link against that too.
If the unresolved function belongs to the standard library, you messed up. Compile the library again with the runtime library your compiler wants.
If it's something else, google the error code and spend 30 minutes staring at StackOverflow.

# Just... why?

Am I missing something? Am I stupid and doing everything wrong? I really hope that's the case so I can get back to programming instead of fighting the linker. Every single time I see I need a library I'm like "Oh fuck...", to the point that sometimes I just don't bother and decide to write things myself.

Sorry about the rant. I'm kinda tired. Do you all have any similar experiences with this? Any tips to ease the pain a bit? Thank you.
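For what it's worth, the consumer side of the "Use CMake too" option is shorter than the manual steps above suggest. A minimal sketch of a CMakeLists.txt encoding the two-folder (headers + binaries) layout described at the top - the `third_party/foo` paths and the library name `foo` are hypothetical:

```cmake
cmake_minimum_required(VERSION 3.16)
project(myapp CXX)

add_executable(myapp main.cpp)

# Hypothetical prebuilt library: headers in third_party/foo/include,
# .lib/.a binaries in third_party/foo/lib (the two folders from the steps above).
target_include_directories(myapp PRIVATE third_party/foo/include)
target_link_directories(myapp PRIVATE third_party/foo/lib)
target_link_libraries(myapp PRIVATE foo)
```

This doesn't make the runtime-library mismatches or missing transitive dependencies go away - it just centralizes the include paths and linker flags in one place instead of per-IDE settings.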
What is The Best Way to Recover Scammed Bitcoin and Stolen Crypto?
This is how to recover your scammed bitcoin or crypto if you have been a victim of a bitcoin or crypto scam. First of all, there is no shame in being the victim of one of these sophisticated and predatory operations. By coming forward you may be able to recover some or all of your lost funds and prevent the scammers from targeting others.

We have extensive experience in helping clients recoup and recover funds lost in bitcoin scams. Our team has built up knowledge and experience of the workings of these schemes and the methods they use to target investors. We also understand how professional misconduct can cause you to lose money. This puts us in a unique position with respect to helping creditors and investors recoup their losses and holding scammers and fraudsters to account.

How to Retrieve Stolen Bitcoin

We only recommend bitcoin recovery experts with an excellent track record of holding scammers to account, including having assets frozen, even where companies have been set up using fake information. Our approach is based on close collaboration with forensic accountants, other experts, and prosecution agencies (both domestic and international) to trace and recover your assets. While it may feel hopeless, there are legal avenues open to you to recover your money. These experts explore each of these and investigate the best approach to recover your investment. In most cases, the recourse is to issue a winding-up petition against the company which first elicited the investment. This enforces the debt and takes the matter to the Official Receiver. At this point, our expert team of solicitors goes to work investigating matters, with the help of other experts, to recover your funds. We will seek to recover the funds from the company or the individuals responsible.
How to Get Money Back from a Scammer; Bitcoin & Crypto Recovery. In order to pursue an individual directly for repayment of funds they've misappropriated, there must be evidence of some kind of wrongdoing, such as misfeasance, misrepresentation, or unjust enrichment. If the investigative and prosecuting authorities have information on the fraudster (normally because they have a criminal conviction), we work to get access to this data. It's important to bear in mind that the civil standard of proof, which only requires matters to be proved on a balance of probabilities, is lower than the criminal standard of beyond reasonable doubt. As well as acting for individuals and businesses, we have also acted in class actions (also known as multi-party actions), where the same fraud has been committed against many individuals. This means that an individual unable to recover funds because of the costs and time involved can have recourse by pursuing legal action as part of a group.
Hello ladies, gays, enbys, and other pots-and-pans enthusiasts, and welcome to the 2019 Hyperpop Rate! I'm your host, quenched, and I'm here to guide you through this month's rate, full of boundary-pushing, experimental, over-the-top bubblegum bass, or as it is more commonly called, hyperpop. The genre has come a long way since its humble PC Music beginnings and has grown to boast a large cult fanbase, the majority of which is made up of members of the LGBTQ+ community. Here are the cling clang bitches we will be rating: In case you're impatient like me and already know the drill... HERE is the link to the Spotify playlist HERE is the link to submit scores
Up first, we have Slayyyter, queen of high-budget-sounding-but-actually-low-budget Grindrcore music, with her self-titled debut mixtape. After releasing a string of singles starting in 2018 with BFF, featuring hyperpop legend Ayesha Erotica, she has held the attention of gays and hyperpop fans everywhere, propelled by her dominating stan-like presence on social media. While not every loose single made the cut for her mixtape, she still has a versatile discography with zero misses, whether making filthy, horny bangers on songs like "Candy" and "Daddy AF", braggadocious bops "Cha Ching" and "Celebrity", or glittery bubblegum pop such as fan-favorite "Mine". Warning: you will become slightly gayer upon album completion.
This rate marks the first time in Popheads rate history we have cut an album from a rate and replaced it with another. LIZ's album "Planet Y2K" was supposed to be in the rate initially, but it came to my attention that she is a transphobic Trump supporter with NO apology or backtrack ever given. So, I posted this comment one day in a Daily Discussion post, and after 72 votes, 65% of you wanted LIZ to be replaced with 100 gecs (which honestly is better musically anyway). 100 gecs are definitely one of the better-known hyperpop acts. The critically acclaimed duo are one of the few hyperpop acts to reach well beyond the LGBTQ+ audience. Consisting of Dylan Brady and Laura Les (who is trans!!!), the duo's debut album, especially money machine, has gone semi-viral within the music sphere and TikTok alike. If you can say one thing about this album, it's that you never know what to expect or what crazy sounds you're going to hear next! They also released a phenomenal remix album called "1000 gecs and The Tree of Clues", reimagining every song on this album and featuring many Popheads favorites such as Charli XCX and Kero Kero Bonito. gecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgecgec
Challenging heteronormativity and the gender binary, Dorian exploded onto the scene with many loose singles, beginning with Clitopia in 2016. These singles led up to Flamboyant, an abrasive electropop album that doesn't have a single skip! The album also features some production by Dylan Brady, who is one half of 100 gecs, also present in this rate. Beyond the songs themselves all being bangers, lyrically Dorian explores different aspects of their sexuality and masculinity in songs such as "Emasculate", "Flamboyant", and "Adam & Steve", a song which is sure to resonate with any religious gays participating in the rate. Dorian has already released their second album "My Agenda", which I also definitely recommend everyone streams after doing the rate! Note: Dorian uses they/them pronouns so I'm gonna be mad if I get any ballots using anything otherwise!
Lastly we have Hannah Diamond, who has been around the longest of the artists in this rate, releasing her first song in 2013. She was one of the first names in PC Music, taking her until 2019 to release her debut album (giving Sky Ferreira a run for her money as far as album waits go). Featuring A.G. Cook production and dreamy vocals from Hannah, this album was definitely worth the long wait!
Unfortunately for this rate, we couldn't include the queen of hyperpop, Emily Montes, as she did not debut until 2020, therefore not fitting the rate theme. At only 5 years old, she is already receiving fairly decent critical reception. She has two projects on Spotify: the self-titled debut album, Emily Montes, and the also self-titled EP, Emily. Featuring experimental production, lyrics that touch on serious topics such as COVID-19 and BLM, and never-before-seen insight into a 5 year old's life, both projects are masterpieces. Despite the seemingly large number of songs, the bonus rate only lasts 7 minutes and 47 seconds, so I definitely recommend setting aside this short amount of time to participate and experience a true visionary. This part is completely optional and is just for fun. You may rate as many or as few songs as you'd like. No 0's or 11's, and no minimum artist average. Here are the songs for the bonus rate:
My brother and I just released the alpha of our open-source declarative programming language (implemented in Haskell!) for writing modern web apps (i.e. React/Node/Prisma) with no boilerplate. We are still learning Haskell and would love to get your feedback / advice!
Web page: https://wasp-lang.dev Docs: https://wasp-lang.dev/docs Github repo: https://github.com/wasp-lang/wasp We have been playing with Haskell for years now, but always on the side, and this is the first bigger project we are doing in Haskell (we thought Haskell would be a good fit for a compiler), so we have encountered interesting challenges and are learning a lot as we solve them. We are mostly sticking to “Boring Haskell”, partly because we are still learning some of the more complex concepts, but also to enable less experienced Haskellers to contribute to the codebase. Some of the interesting Haskell-related challenges we have encountered so far:
Building distributable binaries for OSX/Linux/Win. After a lot of research (haskell helped a lot here) we set up CI to build dynamic binaries for all 3 OSes. With stack, this was all actually pretty easy. One interesting problem was https://github.com/commercialhaskell/stack/issues/848 -> there is no way to provide a portable binary with a path to data_dir. The next step will be to build static binaries. We plan to write a blog post about our experiences soon, we will put it in haskell when it is done!
We are dealing with file paths a lot, and at some point it became hard to figure out which path is relative, absolute, file, dir … -> introducing https://hackage.haskell.org/package/path helped us a lot with this. We actually ended up building our own solution on top of `path` that moves even more information into types, we plan to publish it as a package soon (although it might be slightly niche regarding use cases).
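To give a flavor of what the `path` package buys you (a minimal sketch, not Wasp's actual code; `resolveConfig` and the paths are made-up names): the base (`Abs`/`Rel`) and kind (`File`/`Dir`) live in the type, so mixing them up becomes a compile error instead of a runtime surprise.

```haskell
import Path (Abs, Dir, File, Path, Rel, parseAbsDir, parseRelFile,
             toFilePath, (</>))

-- The types guarantee we can only append a relative path onto a
-- directory; e.g. appending an Abs path onto another Abs path
-- simply does not typecheck.
resolveConfig :: Path Abs Dir -> Path Rel File -> Path Abs File
resolveConfig projectRoot cfg = projectRoot </> cfg

main :: IO ()
main = do
  root <- parseAbsDir "/home/user/project"   -- throws if malformed
  cfg  <- parseRelFile "wasp.config"
  putStrLn (toFilePath (resolveConfig root cfg))
```

The parse functions validate at the boundary once, and everything downstream can rely on the invariants encoded in the type.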
Testing IO code, specifically mocking it while testing. We believe we identified the most common solution(s), and we used them on some parts of the IO code, but we still feel it is pretty hard to do. We probably just need more experience with the mechanisms involved.
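One common pattern for this (a sketch of the general technique, not Wasp's actual code; `MonadFS`, `copyM`, and the file names are made-up): abstract the effects behind a typeclass, give it the real `IO` instance in production and a pure, `State`-backed instance in tests.

```haskell
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.State (State, execState, gets, modify)

-- The effects the code under test is allowed to perform.
class Monad m => MonadFS m where
  readFileM  :: FilePath -> m String
  writeFileM :: FilePath -> String -> m ()

-- Production instance: real IO.
instance MonadFS IO where
  readFileM  = readFile
  writeFileM = writeFile

-- Test instance: an in-memory association list, no IO at all.
type FakeFS = State [(FilePath, String)]

instance MonadFS FakeFS where
  readFileM p    = gets (maybe "" id . lookup p)
  writeFileM p s = modify ((p, s) :)

-- Code under test is polymorphic, so it runs against either instance.
copyM :: MonadFS m => FilePath -> FilePath -> m ()
copyM from to = readFileM from >>= writeFileM to

main :: IO ()
main = print (execState (copyM "a.txt" "b.txt") [("a.txt", "hello")])
```

The test runs the same `copyM` as production but against the fake filesystem, so assertions stay pure and fast.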
Record labels/fields not being scoped by record. This is a well-known pain point, but it took us some time to research all the options and figure out the best practices. In the end we settled on prefixing labels with _ and separating a record into its own module if there are naming conflicts with other records that would make names overly long / complicated. This works fine so far, and while we are looking at lenses, we haven't started using them yet.
Running multiple external processes at the same time and streaming their output was interesting and fun, we learned about and used the `async` package (very easy to use!) and used `Data.Conduit.Process` for running a single process while streaming its stdout and stderr.
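As a tiny illustration of `async` (a toy sketch; the real use case streams the stdout/stderr of external processes via `Data.Conduit.Process`, which this does not show): `concurrently` runs two actions at once, waits for both results, and cancels the sibling if either throws.

```haskell
import Control.Concurrent.Async (concurrently)

main :: IO ()
main = do
  -- Both actions run at the same time; `concurrently` returns both
  -- results and propagates the first exception, cancelling the other.
  (a, b) <- concurrently (pure (sum [1 .. 100 :: Int]))
                         (pure (product [1 .. 5 :: Int]))
  print (a + b)   -- 5050 + 120 = 5170
```

The cancellation-on-exception behavior is what makes `async` so pleasant: you don't leak the second worker when the first one dies.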
Some bigger Haskell-related things on our roadmap:
Right now the language syntax is pretty ad-hoc, and we would like to formalize the language more at some point, which probably means switching from parsec to Happy, or at least replacing parsec with megaparsec.
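If we do go the megaparsec route, a declaration parser might look something like this (a toy sketch for a made-up `app Name` declaration, not Wasp's real grammar):

```haskell
import Data.Void (Void)
import Text.Megaparsec (Parsec, parse, some)
import Text.Megaparsec.Char (letterChar, space1, string)

type Parser = Parsec Void String

-- Parses e.g. "app TodoApp" into the app's name.
appDecl :: Parser String
appDecl = string "app" *> space1 *> some letterChar

main :: IO ()
main = print (parse appDecl "<input>" "app TodoApp")
```

A nice side effect of megaparsec over parsec is the much better error messages out of the box, which matters a lot for a user-facing language.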
Implementing proper error reporting in the generator phase of the compiler. Right now we are using `error`, which is very crude; we plan to instead introduce a monad transformer stack with ExceptT in it, so we can handle errors properly and maybe even do some recovery and similar.
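The direction we have in mind looks roughly like this (a sketch with made-up error constructors, using plain `Except` for brevity; the real stack would be `ExceptT` over whatever monad the generator needs):

```haskell
import Control.Monad.Except (Except, runExcept, throwError)

data GenError
  = MissingEntity String
  | DuplicateDecl String
  deriving (Eq, Show)

-- Instead of calling `error`, generator steps return a typed failure
-- that callers can inspect, report nicely, or recover from.
lookupEntity :: [(String, Int)] -> String -> Except GenError Int
lookupEntity env name =
  case lookup name env of
    Just v  -> pure v
    Nothing -> throwError (MissingEntity name)

main :: IO ()
main = do
  print (runExcept (lookupEntity [("User", 1)] "User"))
  print (runExcept (lookupEntity [("User", 1)] "Page"))
```

Because the error type is a plain ADT, pretty-printing and recovery logic can pattern-match on it rather than parsing strings out of `error` calls.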
Building static binaries (or at least packaging dynamic binaries with the libraries they need to make them effectively more static -> OSX).
Figuring out a better way to work with templates. Right now we are using the `mustache` package, but our surface between Haskell and the templates is pretty loose type-wise, creating space for runtime errors. The plan is to either define data types that precisely describe each template, or (even better) to find another templating mechanism that is better integrated with Haskell. We are looking at the `heterocephalus` package as a possible solution.
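The "one data type per template" idea might look like this (a sketch assuming the `mustache` package's `Text.Mustache` API; `PageTemplate` and its fields are made-up names, not Wasp's real templates):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T
import           Text.Mustache (ToMustache (..), compileTemplate,
                                object, substitute, (~>))

-- One record per template: the type is the single source of truth
-- for which variables the template may reference.
data PageTemplate = PageTemplate
  { _pageTitle :: String
  , _pageRoute :: String
  }

instance ToMustache PageTemplate where
  toMustache p = object
    [ "title" ~> _pageTitle p
    , "route" ~> _pageRoute p
    ]

main :: IO ()
main =
  case compileTemplate "page" "{{title}} at {{route}}" of
    Left err   -> print err
    Right tmpl -> putStrLn (T.unpack (substitute tmpl (PageTemplate "Home" "/")))
```

This doesn't catch a template referencing an unknown variable at compile time (mustache silently renders it empty), but it does centralize what each template is supposed to receive.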
We are looking for alpha testers, contributors, feedback, so let us know if you would like to participate!
It's nice to see that we have users from all around the world, even if nearly 50% are from English-speaking countries. Image 1.
How old are you?
The average age of a /OnePiece user is 23.62 years. We have roughly 10% of users that are underage, and 10% that are 30 years old or more. Image 2.
There is no surprise there. For the others, we have some Gender Fluids, transgender folks, Bigenders, quite a lot of Non-binary, a Loli, a Furry, and nearly 100 Oden (You wish), as well as some rude people, but I won't put up what they said.
Manga or Anime?
No real surprise here either. Considering the subreddit has a lot of spoilers and is focused around the chapter release, it's obvious there are only a few Anime Only people here. So thank you for sticking around, even if it isn't the best place to avoid spoilers.
For approximately how long have you been following One Piece?
1 Year or less
10 years or more
Nearly 40% of our users have followed the series for 10 years or more. (To give an idea, this means they have followed the series since before the timeskip, as chapter 597 was released at the end of August 2010.) For the rest, we have roughly the same number of new readers who stay with the series. So it's quite good that we keep bringing in new blood without a decrease in new readers.
Where does One Piece rank on your list of favorite manga?
Below Top 10
Well, you are in /OnePiece after all. So it's kinda obvious the manga is either your favorite or in your top 3. If it isn't your number 1, which series rank higher than One Piece for you?
Do you own One Piece Merchandise?
Those are some good numbers I would say: 55.8% of users have some merchandise and are probably supporting the series (depending on where you bought it). Among the OTHER answers given, some good ones are: an autograph from Brook's dub VA, an Alvida pre-Devil-Fruit bodypillow, a Chopper teddy bear, soundtracks, a custom-made 3D-printed keychain, a databook.
Subreddit Section :
Do you visit OnePiece mostly on mobile or on desktop?
Mobile or Apps
If you are using desktop, are you using the old version of reddit? Or the redesign?
It seems like most users are on Mobile and Apps, as well as the redesign on desktop, so it's probably time to pay more attention to those rather than the old version; that way we can get banners/flairs for users on the new version of reddit.
How often do you: make a submission on OnePiece? / comment? / read the rules?
Very often (Daily)
Very often (Daily)
Check the rules :
Very often (Daily)
This really shows that there are a lot of lurkers on the subreddit: most of you won't ever post or comment, with only 8% of users creating submissions and 25% commenting. As for the rules, there isn't any surprise, since nearly every post respects the rules (only 1/5 of posts need to be removed), so thank you to all of those that read them.
Content you enjoy the MOST/the LEAST.
Content you enjoy the most :
Media (Photo and Video)
So, without surprise, people in this subreddit enjoy the Theories/Discussions the most out of every type of post, followed by the Fanarts. Which is good, since about 75% of posts made are Discussions (50% of the total) and Fanarts (25% of the total). Content you enjoy the least:
Here there isn't any content that most users enjoy the least, but it still seems like users don't want to see that many Merchandise or Cosplay posts (YouTuber videos are very rare). Also, a quick reminder: Discussions/Theories are mostly found by sorting by New. This is where you will see all of them, as it's hard for them to show up on the front page of the subreddit (but if one shows up on Hot, then it's a very good one).
Do you only use the subreddit for the Spoiler and Chapter Discussion thread?
It's nice to see that roughly 2/3 of the users are here for more than just the Spoilers and Chapter discussion. But there is still a huge part that only uses the subreddit for that.
Do you want the spoilers gone from this subreddit?
As expected, Luffy is the favorite Straw Hat for a lot of people; he's also the Straw Hat with the fewest "Least Favorite" votes. After him, Zoro is the second favorite, followed by Sanji, Robin, Usopp, and Brook, with the other Straw Hats having very few votes (and Nami having the fewest "Favorite" votes). After that, it seems like Chopper, Usopp, and Franky are the ones people like the least out of the Straw Hats. I know it was a hard question for some of you, but the results are still interesting to know.
Which Strawhat has the saddest backstory?
The Straw Hat with the saddest backstory is Robin! Followed by Brook, then Sanji, Chopper, and Nami.
What is your favorite Yonko crew?
So the favorite Emperor's crew is the Red Hair Pirates! Which is very impressive since we haven't seen much of them. I guess Oda had better deliver when it comes to seeing them in action after Wano.
Who is your favorite Admiral?
While Garp was only a Vice Admiral, he was put in the poll, and he won it! Without him, Aokiji is the favorite, followed by Fujitora. Image 4
Who is your favorite Supernova (outside the Straw Hats)?
Who else but the character that nearly managed to defeat Luffy in the 5th popularity poll? Law is the favorite Supernova outside of the Straw Hats!
Which is your favourite canon arc in One Piece?
The favorite canon story arcs are (you could vote for more than one):
Which is your least favourite canon arc in One Piece?
The least favorite canon story arcs are (you could vote for more than one):
Long Ring Long Land Arc
Favorite Cover Story?
Enel's Great Space Operations
From the Decks of the World : "The 500.000.000 Man Arc"
The Stories of the Self-Proclaimed Straw Hat Grand Fleet
Ace's Great Blackbeard Search
Straw Hat's Separation Serial
Character Design in One Piece :
Do you like the female character designs in One Piece?
I have no opinion.
Do you like the male character designs in One Piece?
I have no opinion.
It's true that Oda isn't the best when it comes to female character design. However, it seems like the majority of users don't have a problem with that.
Are fights a determining factor in your enjoyment of the series/arcs?
Now this is rather surprising, I must say. What do you think about this?
What is/are your (absolute) favourite aspect(s) of One Piece?
From the results we have, it seems like the World-Building is the favorite part of One Piece (with 88.6% of voters choosing it). It's followed by the Adventure (69%), the Characterization (54.4%), the Inter-character Relationships (49.4%), the Action (36%) and the Art Style (26.2%). And those results are unsurprising: some of the most upvoted chapters on this subreddit are huge world-building moments, like 907 (Shanks talks to the Elders) or 957 (ULTIMATE).
On Par with Pre-TS
Better than Pre-TS
Worse than Pre-TS
This question is one of the most asked, with a lot of vocal voices saying that post-TS is worse than pre-TS. It's different for sure, but now we know how the community feels about it.
If you could eat a Devil Fruit, what type would you want?
Most people would choose to eat a Logia, and it seems like becoming a Furry is the least popular choice in this subreddit.
The Final Antagonist of One Piece will be :
With 48.5%, it's Blackbeard! Really? That is surprising to me, since it's obvious that Oda will make the SH fight against the World Government after they find the One Piece, and I honestly don't see Blackbeard being the final antagonist because of that. So people who voted for this, what was your reasoning?
What is One Piece Biggest Flaw?
Some of the biggest flaws mentioned are:
The lack of character deaths outside of flashbacks
All of which are fair criticisms of the series.
Random Questions about the Series :
As of Wano, is Jimbei stronger than Zoro?
Yes but Zoro will be stronger soon
I guess people really want Zoro to always be the second strongest no matter what... I expected this result, but I was still disappointed...
Was Zoro as strong as Luffy just after the timeskip?
I... Really? 31.5% said yes?
Will Sanji get laid by the end of the story?
Nearly the perfect split, and it's easy to see why it's very divisive. (Also shows that every vote counts).
Will Usopp be part of the 1 Billion Club by the end of the story?
The Straw Hats will go to Laugh Tales :
Before fighting the WG
After Fighting the WG
It's been hinted at a lot that the SH will go to Laugh Tales before taking on the WG. So for me it feels rather strange to have more than 1/4 voting for them to reach the final island after.
Who will be the one to defeat Kaido? (So give the last hit)
With 66.3% of the votes, the one who will give the last hit to Kaido is: Luffy! He is followed, with 11.5%, by someone else (who isn't Law/Kid/Zoro/Big Mom/Scabbard/Admiral), and with 11% by one of the Scabbards. Zoro received 6.4% of the votes.
Who will be the first SH to realize their dream?
Most users believe that Usopp will be the first one to realize his dream! I think the same, as his is really the easiest dream to realize; I could bet it will happen in Elbaf. After that, we have Luffy and Robin, and it makes sense since their dreams are linked: both can be fulfilled once they reach Laugh Tales.
How many members will the crew have at the end? (With Luffy)
And most people want 11 members total in the crew (with 28.6%), 27.5% want 12 members, while 19.8% want the crew to be complete right now with Jinbe.
Who do you think wins in a 1v1 : An Emperor or an Admiral?
If you are active on the subreddit, you know it's one of the questions that create the most discussion/arguments. So it's nice to know the overall opinion of the subreddit on this question (doesn't mean it's always correct, mind you).
Is Mihawk emperor's level?
Also a very divisive question on this subreddit.
Is Aokiji emperor's level?
Is Akainu emperor's level?
So they fought for 10 days in a very close battle, with Akainu winning in the end after a long and hard fight. And one is Emperor's level while the other isn't? Really? I find that hard to understand.
If Oden was alive, would he be stronger than Mihawk?
How strong was Oden at the time of his death?
< Top 20
I like Oden, but sometimes I feel like people are overestimating him.
Who is stronger between Shanks and Mihawk?
This is also one of the questions creating the most arguments on this subreddit; after all, Mihawk is the World's Strongest Swordsman, but Shanks is an Emperor, and became one after losing his arm.
Is Kaido stronger now than 20 years ago?
Yes, he's stronger
Had Ace survived, would Wano be liberated by now?
Could the Marines take on ALL the Yonko at the same time?
Yes in Marineford only
2 at the same time
3 at the same time
This question is also linked to how you see the Emperor vs Admiral. So depending on which side you are on, you are more likely to pick Yes or No.
Which character do you want focus on next?
All very good choices, and all of them are characters we have known for a long time without really knowing them.
Will Blackbeard find the One Piece before Luffy?
How strong is Monkey D. Dragon?
< Top 10
Here, most people seem to think that Luffy's father, Garp's son, is among the strongest characters of the series. Oda had better live up to our expectations then. As for his bounty: well, 31.6% think it will be more than 6 billion and 28.1% think it will be between 5-6 billion. That reminds me, I once made a poll asking people what Sabo's bounty would be (since we knew it was getting revealed in a magazine soon). So maybe I will do the same for Dragon? That could be nice.
Who is currently the strongest Emperor?
I wonder if the recent chapters made people change their perception on this...
What are the fights you would want to see?
Blackbeard vs Shanks
Garp vs Rocks
Garp vs Roger
Mihawk vs Shanks
Akainu vs Aokiji
How long do you think One Piece has left? (At a rate of 40 chapters a year)
Image 5. As you can see, most people think One Piece has at least 5 years left to go. We all know Oda is terrible at respecting his own objectives. And this is good: the more One Piece, the better.
On a scale from Spandam to Whitebeard/Roger, How strong is Im?
For this question, it seems like most people put Im at the same level as Whitebeard/Roger, with 28.6% voting for that. I honestly don't know how strong I want Im to be.
What arcs, after Wano, do you want?
The arcs people want the most are :
The Final War
Red Hair Pirates
So arcs teased for years (Elbaf/Laugh Tales/Final War), and arcs about characters people want to see (Vegapunk/Red Hair Pirates).
How is Blackbeard able to use multiple Devil Fruits?
More than 1 soul
It's one of those questions where people have very different opinions. Right now there isn't really a major consensus in the fandom, even if the theory about it being related to the Yami Yami is the most popular. In the Other category, there were: the Cerberus Devil Fruit option, Blackbeard being a triplet, him actually being 2+ kids in a trenchcoat, him being a failed Vegapunk experiment, him having several stomachs, him being pregnant (stop reading fanfiction), him putting the power inside his rings, him being a great guy, and him being a cunt.
Haki is :
Image 6. Overall, people like Haki in the series, rating it 4.38 out of 5!
How many arcs are left after Wano?
Image 7. Here, it seems the community's answer would be 4-5 arcs left, which would mean (based on the "How long does One Piece have left" answers) about a year per arc on average.
The final war of One Piece will be :
SH+RA vs WG+Marines vs BB
SH+RA vs WG+Marines
SH vs RA vs WG
I just don't see Blackbeard being in the final war, as my opinion is that he will be dealt with before it. Among the other answers, there were: Straw Hats vs Blackbeard Pirates, Family of D vs Im-sama, total civil war in the Marines, Straw Hats vs Shanks, Straw Hats vs Pound, Zoro vs World Government, Dugongs vs Buggy.
Will Luffy die at the end of One Piece?
Will Luffy die?
An ending where Luffy dies wouldn't be a good ending for me. He needs to survive and go on more adventures.
Are Shakky and Rayleigh Mihawk's parents?
Will the crew still be together at the end of the series?
Yes, they will keep going on adventures together. | 57.6%
No, they will move on, like the Roger Pirates. | 42.4%
Like with Luffy living, I want the crew to stay together and sail together for many more adventures. I could see them taking breaks from time to time, but them staying together would be the best ending for me.
Can the Red Line be destroyed with Ancient Weapons?
What is the biggest mystery left to be revealed?
The most common answers were : The Void Century, the Will of D, Im, The One Piece, Joy Boy, Luffy's mother and Who is Pandaman?
What is the One Piece?
Here, there was plenty of : "No idea", The friends we made along the way, a Devil Fruit, Knowledge, Uranus, History, a book, my mom.
What sort of Devil Fruit do you want to see in the story?
The most common answer was : Water Logia! Followed by Wind Logia and people wanting more mythical Zoans.
What is the craziest theory you believe?
Here are a few of them :
Shanks is a Celestial Dragon
That Vegapunk is going to flip a switch in the Pacifista programming to fight the marines at the end.
Luffy's mom was a celestial dragon
Devil fruits are all artificial from the void century
That Finland doesn't exist
Zoro is going to get Rodger's disease
D's were the original Celestial Dragons
Weevil was made by Vegapunk using Whitebeard's cells and then was discarded until Bakkin picked him up
One of the Roger Pirates (probably Scopper Gaban) is on Laugh Tale waiting for whoever finds it, sort of like how Crocus and Rayleigh seem to be positioned to monitor rookie pirates
Onigashima is an Oars-like skeleton and Big Mom is gonna bring it to life.
The different races came from other planets/moons
Tama is a Kurozumi
Usopp is a descendant of Mont Blanc Noland
Luffy hatched from an egg.
The fish that bit Shanks's arm off was Joyboy's pet
Bon chan is Kin'emon's son
Oda no longer draws the manga
bonney and ace having a child
That Perospero is going to help kill Big Mom.
Dragon being former Admiral
What are your favorites? And here it is, the 500K survey! It took me far too long to make, as I underestimated the time needed to sort the answers and create this post. Like damn. I hope you enjoyed it. The answers for the Survey Saga will be up next in some time.
Understanding Binary Options. At least on the surface, binary options are structured just like a $100 bet on a football game: you buy the team you like, or you sell the team you don't like. Binary options brokers will generally have their trading platform open when the market of the underlying asset is open, so if trading the NYSE, Nasdaq, DOW or S&P, the assets will be tradable during the same hours those markets are open. Any moves by the Federal Reserve, for example, will feed into binary markets immediately, just as you would expect. Forex trading, by contrast, has no central market. Since binary options are time-bound and condition-based, probability calculations play an important part in valuing them. For instance, comparing the probabilities, a trader could surmise there is a 20.5 percent greater probability that price will close below 9667. The probability of OTM (out of the money) can be calculated by subtracting the probability of ITM (in the money) from 100%: 100% − probability of ITM = probability of OTM. This can also be used to get an idea of what the market expects from an asset's price; furthermore, this is the probability to look at when selling options, since when selling you want the sold options to expire worthless. If trading binary options you don't need to worry about stops and profit targets, although you will need to choose an expiry time. Choose an expiry time that is about 5 to 7 "bars" away from your entry time. For example, if you are monitoring a 1-hour chart, your expiry should be 5 to 7 hours away. This gives enough time for the price to follow through, but not too much. Binary options traders can also take advantage of repetitive, high-probability action by fading the move when price moves outside of the Bollinger Bands.
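The ITM/OTM relationship described above is simple complementarity, which a one-liner makes explicit (a toy illustration only, not trading advice; probabilities are in percent):

```haskell
-- If an option has a 79.5% probability of finishing in the money,
-- the out-of-the-money probability is the remainder to 100%.
probOTM :: Double -> Double
probOTM probITM = 100 - probITM

main :: IO ()
main = print (probOTM 79.5)   -- 20.5
```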
Again, the number of pips that price moves is not the concern for binary options traders; what matters is simply that the options close in the anticipated direction.