Tuesday, February 25, 2020

The Inspirations Of Oceanhorn 2: Knights Of The Lost Realm - Part 3

For this last installment of our Inspirations series, we had the chance to sit down with Heikki Repo, Creative Director of Cornfox & Brothers. Heikki, one of the founders of the company, is responsible for the overall vision and story of the Oceanhorn saga.




"When talking about influences, we need to differentiate between the inspiration for the whole series, and those specific to Oceanhorn 2: Knights of the Lost Realm," he says.


Upon the release of the first iPhone, the whole studio was excited to know that everyone would have a powerful gaming machine in their pockets. At the time, the only action RPGs on the device were some fairly obscure Korean-style games – nothing in the vein of Zelda or Secret of Mana was available.


"Some of the games I hold most dear from my childhood were portable," says Heikki, "two of my favorites are Link's Awakening and Mystic Quest – Final Fantasy Adventure (Seiken Densetsu). I love them because they could combine the portable experience with extremely high-quality content. Mystic Quest, for example, uses a real myth (think Excalibur) and builds its story upon it. It also has a lot more drama than Zelda – a quite peculiar trait for those years."





Oceanhorn, since the beginning, was planned as Cornfox's own RPG franchise: an homage to the classics with its own personality. Versatility and gameplay experimentation were the keywords the company used as a guiding principle during the development of the first chapter.


"The first Oceanhorn is undeniably a Zelda-like, but we have XPs, and the story becomes increasingly dramatic towards the end – that's not something you'd expect, for instance, from a Zelda game. These ambitions carry on to the second game as well. When it comes to the actual plot, I think I've been deeply influenced by Final Fantasy VI, VII, and IX: they never take shortcuts, and everything that happens there is the outcome of very thoughtfully laid out worlds and events. What actually goes down in the games is the natural consequence of what already had happened before."






The story told in Oceanhorn 2: Knights of the Lost Realm is the background story of the first Oceanhorn: an opportunity to lay strong foundations for the saga, add more details, and create a universe that will keep making sense for potential new projects as well. "The production phase of Oceanhorn 2 brought everything into focus. Certain story elements were a bit vague, and I think we managed to handle them quite well in Oceanhorn 2."


Visually, Oceanhorn 2 will be an inviting, colorful game. Here the references are, again, Zelda and the Mana series: while its approach is console-style, the game will feature some dark undertones.
"Oceanhorn was developed by three people", says Heikki, "me, Antti, and Jukka. It was a 15-20 hours game, so it was a huge undertaking for so few people, but we managed to squeeze in cinematics and most of what you'd expect from an RPG. At the time I was playing The Last Story, Hironobu Sakaguchi's game for Wii." Sakaguchi had previously delivered Lost Odyssey, Xbox 360's own 'Final Fantasy'. The Last Story, developed in collaboration with Nintendo, wasn't destined, for obvious reasons, to set a new graphical standard, but the gameplay was something truly inspiring. "I saw that game as Sakaguchi's idea of where to take the genre's next: he focused on the feeling of presence, with party members talking to each other during gameplay, and an unprecedented possibility to use the environment to your advantage. The story wasn't limited to cinematics, but brought directly to the levels."


Energized by The Last Story, Heikki decided Oceanhorn 2: Knights of the Lost Realm would be a third-person experience with multiple party members.




"I don't mind when people make comparisons with Skyward's Sword or Breath of the Wild, it means we're giving out the right vibes. If you compare screenshots from Call of Duty and Battlefield it might not always be obvious which one is which, but when you get to play, these games feel quite different. The same is true if you compare Oceanhorn to Zelda or Xenoblade Chronicles – they provide similar experiences but each in its own unique way."


One more saga that had an impact on Oceanhorn 2: Mass Effect. "After I played the Mass Effect Trilogy, I realized how the characters' companionship and the way they explore the planets made those games great. I think that, combined with the Zelda-like heritage of the first Oceanhorn, is what makes Knights of the Lost Realm special", Heikki concludes.

---

Want to read these updates before anyone else? Subscribe to our newsletter.

Monday, February 24, 2020

Bimonthly Progress Report For My Twitch Channel, FuzzyJCats, Sept 2 To November 1

FuzzyJCats Twitch Channel

It's going into December, and as usual I've been procrastinating, because there haven't been any changes as major as no longer caring about viewer numbers. There are still times when my neuroses and insecurities about numbers flare up, but I was able to get over them after processing with my best friend, Todd.

Because that breakthrough was huge, I felt I wasn't making any monumental improvements, except for taking 15-minute breaks after 2 hours of streaming, which helped me last an extra 2 hours or so, getting in the much-needed practice without fatigue.

I didn't think about taking breaks because I see my streamer friends stream 12 hours straight without any breaks. And the meme on Twitch is to stream until you drop in order to gain viewers. I would stream until I couldn't focus any longer (normally around 2 hours) and stop.

I stopped typing out my streamer friends' links as I noted the emotional issues and stress it was causing me. It's so easy to forget to shout someone out, and when you do forget, you worry that the person felt slighted. Therefore, I'm only shouting out when being hosted or raided. Further, having excessive shoutouts made the chat harder to read, and I wanted a cleaner interface.

Since this progress report was long overdue and it was in the back of my mind, I was wondering what else I could do in the meantime to take my streams to the next level. The answer has to go back to the basics - what do I want to achieve in streaming? Because if I know what I want, I can find ways to accomplish that goal. Clearly, to "git gud" - but what does that mean specifically?

This is where the clichés that two heads are better than one and that you can achieve anything with friends ring true, even if everyone cringes when they hear them.

I kept asking the extraordinaire smotpoker887 over and over how I could improve, but I wasn't sure what I wanted to accomplish in streaming. After hearing my neurotic rant, Smot merely asked, "why not be the best friend you can possibly be" through streaming.

That is what I wanted to accomplish! This is not too hard, because you easily get to know your viewers - by remembering past stream chats and talking to them through social media messaging - so that when they show up, you can ask how their house is coming along (only if they mentioned it publicly, to respect privacy).

Because I don't have a photographic memory and we miss a lot of chat while streaming, I've been using Chatty to review the chat logs - this helps remind me of what was said in stream so I can get to know my new viewers better. Thanks to Smot, who explained how I can upload these logs to Google Drive, since they were hard to read on the potato PC. I can then read these logs anywhere I have internet access.


Because I was working on being more friendly and engaging, I didn't have as much gameplay (this will improve through practice). I say hi as soon as I notice a viewer show up, but I kept forgetting to then go back to what I was talking about, which takes a lot of mental focus.

I wasn't conscious of using that strat last month. Writing this progress report is quite helpful to concretely remind myself to be less tangential - which is why I want to be more timely in these bimonthly progress reports.

The discussion with Smot occurred maybe 2 months ago, and I got lulled into complacency as we all do as I focused on being more engaging with viewers.

However, recently, I wanted to see how I can be more entertaining: being a friend, but an entertaining friend, which I think will take streaming to the next level, especially as it's an entertainment medium.

After having two sleepless nights, I then talked with my best friend Todd, who helped me be more specific about what I mean by being entertaining. I told him that I wanted to be socially engaging. However, he mentioned the eye-opening reality that hearing another person's conversation may not be entertaining. Saying hello to viewers one after the other is not the most riveting or compelling conversation, after all, and is most likely only interesting to the person you're addressing.

After clarifying what I wanted, he mentioned the radio broadcasting 101 basics. This was rather shocking considering that when you search for how to be an entertaining Twitch streamer, no one writes about this, yet it's the most basic thing to do as an entertainer! In other words, that's how far behind Twitch is compared to other forms of entertainment.

Todd mentioned that I could write down the stories I want to tell and rehearse them before each stream. After he said that, my immediate thought was "wow, that's so basic!" even though I hadn't thought about rehearsing. After all, we all hear about how much entertainers rehearse out loud, spending hours a day honing their skills.

I noticed that when I have ideas to say while streaming, I even rehearse them in my mind, but when the time comes, I'm too inhibited to actualize what I envisioned; it doesn't come out as colorful as I wanted and falls flat. I also noticed that I want to expand on conversational threads, but I hold back for fear of burdening the listener (growing up in the New England area, children were to be seen but not heard). I know exactly why I do these things, but knowing is the easy part; changing is the challenge.

Therefore, I have to do "inner work", accepting myself and not caring about "acting the fool" on stream for fear of viewers thinking negatively of me. Cognitive Behavioral Therapy (CBT) can work here, because what's the worst that can happen if I rehearse and then act out the story the way I envision it, uninhibited? The absolute worst is that the viewers think I'm stupid or a loser or bad at acting (which I already know I am), but who cares? If someone actually writes that and means it during stream (i.e. a true troll; my viewers tease me affectionately on stream), then ban.

I'm also working on self-compassion - accepting yourself unconditionally - so you don't judge yourself (which leads to inhibition) or others (a pinched soul).

Writing down a full-fledged "script," rehearsing it aloud, and practicing may help me be less inhibited and perform the way I want to. I can even force Todd to watch. It'll be an exciting adventure to see if these preparations will significantly improve the entertainment value of the stream!

Goals Achieved:
  1. 15 minute breaks = longer streams = more practice
  2. No more excessive shoutout commands = less stress, cleaner chat
  3. Be a friend (first priority) and find easier ways of reading chat logs
    1. Be more diligent about reading chat logs
  4. Realization of rehearsing scripts
Improvements to be made aside from the above:
  1. Make sure I work on the bimonthly progress report as it solidifies what I'm supposed to be working on, and forces me to find out what other things I can improve.
  2. More gaming action and fluency as per usual.
The How of Happiness Review

Friday, February 21, 2020

Tech Book Face Off: Data Smart Vs. Python Machine Learning

After reading a few books on data science and a little bit about machine learning, I felt it was time to round out my studies in these subjects with a couple more books. I was hoping to get some more exposure to implementing different machine learning algorithms as well as diving deeper into how to effectively use the different Python tools for machine learning, and these two books seemed to fit the bill. The first book with the upside-down face, Data Smart: Using Data Science to Transform Data Into Insight by John W. Foreman, looked like it would fulfill the former goal and do it all in Excel, oddly enough. The second book with the right side-up face, Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow by Sebastian Raschka and Vahid Mirjalili, promised to address the second goal. Let's see how these two books complement each other and move the reader toward a better understanding of machine learning.

Data Smart front cover vs. Python Machine Learning front cover

Data Smart

I must admit, I was somewhat hesitant to get this book. I was worried that presenting everything in Excel would be a bit too simple to really learn much about data science, but I needn't have been concerned. This book was an excellent read for multiple reasons, not least of which is that Foreman is a highly entertaining writer. His witty quips about everything from middle school dances to Target predicting teen pregnancies were a great motivator to keep me reading along, and more than once I caught myself chuckling out loud at an unexpectedly absurd reference.

It was refreshing to read a book about data science that didn't take itself seriously and added a bit of levity to an otherwise dry (interesting, but dry) subject. Even though it was lighthearted, the book was not a joke. It had an intensity to the material that was surprising given the medium through which it was presented. Spreadsheets turned out to be a great way to show how these algorithms are built up, and you can look through the columns and rows to see how each step of each calculation is performed. Conditional formatting helps guide understanding by highlighting outliers and important contrasts in the rows of data. Excel may not be the best choice for crunching hundreds of thousands of entries in an industrial-scale model, but for learning how those models actually work, I'm convinced that it was a worthy choice.

The book starts out with a little introduction that describes what you got yourself into and justifies the choice of Excel for those of us that were a bit leery. The first chapter gives a quick tour of the important parts of Excel that are going to be used throughout the book—a skim-worthy chapter. The first real chapter jumps into explaining how to build up a k-means cluster model for the highly critical task of grouping people on a middle school dance floor. Like most of the rest of the chapters, this one starts out easy, but ramps up the difficulty so that by the end we're clustering subscribers for email marketing with a dozen or so dimensions to the data.
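
For comparison, here's roughly what that kind of k-means clustering looks like in Python with scikit-learn rather than in the book's spreadsheets. This is only a sketch; the subscriber features and their count are made up for illustration:

# Hedged sketch: k-means clustering with scikit-learn on made-up subscriber features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Pretend each row is one email subscriber described by a dozen behavioral features.
X = rng.normal(size=(200, 12))

X_scaled = StandardScaler().fit_transform(X)                 # scale features so no column dominates
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

print(kmeans.labels_[:10])             # cluster assignment for the first ten subscribers
print(kmeans.cluster_centers_.shape)   # (4, 12): one centroid per cluster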

Chapter 3 switches gears from an unsupervised to a supervised learning model with naïve Bayes for classifying tweets about Mandrill the product vs. the animal vs. the Mega Man X character. Here we can see how irreverent, but on-point Foreman is with his explanations:
Because naïve Bayes is often called "idiot's Bayes." As you'll see, you get to make lots of sloppy, idiotic assumptions about your data, and it still works! It's like the splatter-paint of AI models, and because it's so simple and easy to implement (it can be done in 50 lines of code), companies use it all the time for simple classification jobs.
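
He's not exaggerating about the line count, either. Outside the spreadsheet, a naïve Bayes text classifier really is tiny; here's a hedged scikit-learn sketch with invented example tweets standing in for the book's Mandrill data:

# Hedged sketch: a tiny naive Bayes tweet classifier in scikit-learn.
# The example tweets are invented; the book builds this up in Excel with real Mandrill data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "mandrill api makes sending transactional email easy",
    "loving the new mandrill webhook integration",
    "saw a mandrill at the zoo, what a colorful monkey",
    "mandrills are the largest monkeys in the world",
]
labels = ["product", "product", "animal", "animal"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)

print(model.predict(["new mandrill email template feature"]))  # most likely 'product'
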
Every chapter is like this and better. You never know what Foreman's going to say next, but you quickly expect it to be entertaining. Case in point, the next chapter is on optimization modeling using an example of, what else, commercial-scale orange juice mixing. It's just wild; you can't make this stuff up. Well, Foreman can make it up, it seems. The examples weren't just whimsical and funny, they were solid examples that built up throughout the chapter to show multiple levels of complexity for each model. I was constantly impressed with the instructional value of these examples, and how working through them really helped in understanding what to look for to improve the model and how to make it work.

After optimization came another dive into cluster analysis, but this time using network graphs to analyze wholesale wine purchasing data. This model was new to me, and a fascinating way to use graphs to figure out closely related nodes. The next chapter moved on to regression, both linear and non-linear varieties, and this happens to be the Target-pregnancy example. It was super interesting to see how to conform the purchasing data to a linear model and then run the regression on it to analyze the data. Foreman also had some good advice tucked away in this chapter on data vs. models:
You get more bang for your buck spending your time on selecting good data and features than models. For example, in the problem I outlined in this chapter, you'd be better served testing out possible new features like "customer ceased to buy lunch meat for fear of listeriosis" and making sure your training data was perfect than you would be testing out a neural net on your old training data.

Why? Because the phrase "garbage in, garbage out" has never been more applicable to any field than AI. No AI model is a miracle worker; it can't take terrible data and magically know how to use that data. So do your AI model a favor and give it the best and most creative features you can find.
As I've learned in the other data science books, so much of data analysis is about cleaning and munging the data. Running the model(s) doesn't take much time at all.
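
To make that point concrete, here's a rough sketch of how little code the modeling step itself takes once the features exist. The data below is synthetic, not the book's purchase data, and logistic regression stands in for the chapter's regression models:

# Hedged sketch: the modeling step is short once the features are ready.
# Synthetic data stands in for the cleaned-up purchase features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                       # e.g. per-customer purchase features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
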
We're into chapter 7 now with ensemble models. This technique takes a bunch of simple, crappy models and improves their performance by putting them to a vote. The same pregnancy data was used from the last chapter, but with this different modeling approach, it's a new example. The next chapter introduces forecasting models by attempting to forecast sales for a new business in sword-smithing. This example was exceptionally good at showing the build-up from a simple exponential smoothing model to a trend-corrected model and then to a seasonally-corrected cyclic model all for forecasting sword sales.
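
Simple exponential smoothing, the starting point of that build-up, is only a few lines when written out directly. This is a from-scratch sketch with made-up sales numbers, where alpha is the smoothing factor:

# Hedged sketch: simple exponential smoothing from scratch.
# `alpha` is the smoothing factor; higher values react faster to recent demand.
def simple_exponential_smoothing(series, alpha=0.3):
    level = series[0]
    smoothed = [level]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

monthly_sword_sales = [165, 171, 147, 143, 164, 160, 152, 150, 159, 169]  # made-up demand
print(simple_exponential_smoothing(monthly_sword_sales)[-1])  # one-step-ahead forecast level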

The next chapter was on detecting outliers. In this case, the outliers were exceptionally good or exceptionally bad call center employees even though the bad employees didn't fall below any individual firing thresholds on their performance ratings. It was another excellent example to cap off a whole series of very well thought out and well executed examples. There was one more chapter on how to do some of these models in R, but I skipped it. I'm not interested in R, since I would just use Python, and this chapter seemed out of place with all the spreadsheet work in the rest of the book.

What else can I say? This book was awesome. Every example of every model was deep, involved, and appropriate for learning the ins and outs of that particular model. The writing was funny and engaging, and it was clear that Foreman put a ton of thought and energy into this book. I highly recommend it to anyone wanting to learn the inner workings of some of the standard data science models.

Python Machine Learning

This is a fairly long book, certainly longer than most books I've read recently, and a pretty thorough and detailed introduction to machine learning with Python. It's a melding of a couple other good books I've read, containing quite a few machine learning algorithms that are built up from scratch in Python a la Data Science from Scratch, and showing how to use the same algorithms with scikit-learn and TensorFlow a la the Python Data Science Handbook. The text is methodical and deliberate, describing each algorithm clearly and carefully, and giving precise explanations for how each algorithm is designed and what its trade-offs and shortcomings are.

As long as you're comfortable with linear algebraic notation, this book is a straightforward read. It's not exactly easy, but it never takes off into the stratosphere with the difficulty level. The authors also assume you already know Python, so they don't waste any time on the language, instead packing the book completely full of machine learning stuff. The shorter first chapter still does the introductory tour of what machine learning is and how to install the correct Python environment and libraries that will be used in the rest of the book. The next chapter kicks us off with our first algorithm, showing how to implement a perceptron classifier as a mathematical model, as Python code, and then using scikit-learn. This basic sequence is followed for most of the algorithms in the book, and it works well to smooth out the reader's understanding of each one. Model performance characteristics, training insights, and decisions about when to use the model are highlighted throughout the chapter.
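
For a feel of the scikit-learn end of that sequence, fitting a perceptron takes only a few lines. This is a sketch in the same spirit as the book's examples (Iris data, standardized features), not its exact code:

# Hedged sketch of the scikit-learn step: a perceptron classifier on the Iris data.
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)

scaler = StandardScaler().fit(X_train)               # standardize features before training
clf = Perceptron(eta0=0.1, random_state=1).fit(scaler.transform(X_train), y_train)
print(f"test accuracy: {clf.score(scaler.transform(X_test), y_test):.2f}")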

Chapter 3 delves deeper into perceptrons by looking at different decision functions that can be used for the output of the perceptron model, and how they could be used for more things beyond just labeling each input with a specific class as described here:
In fact, there are many applications where we are not only interested in the predicted class labels, but where the estimation of the class-membership probability is particularly useful (the output of the sigmoid function prior to applying the threshold function). Logistic regression is used in weather forecasting, for example, not only to predict if it will rain on a particular day but also to report the chance of rain. Similarly, logistic regression can be used to predict the chance that a patient has a particular disease given certain symptoms, which is why logistic regression enjoys great popularity in the field of medicine.
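
In scikit-learn terms, those class-membership probabilities come out of predict_proba, which is the sigmoid (or softmax) output before any threshold is applied. A hedged sketch, using a built-in dataset rather than anything from the book:

# Hedged sketch: class-membership probabilities from logistic regression,
# i.e. the sigmoid output before a threshold turns it into a hard label.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)       # scale features so the solver converges cleanly

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:3]))                   # hard class labels
print(clf.predict_proba(X[:3]))             # per-class probabilities behind those labels
print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # the squashing function itself
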
The sigmoid function is a fundamental tool in machine learning, and it comes up again and again in the book. Midway through the chapter, they introduce three new algorithms: support vector machines (SVM), decision trees, and K-nearest neighbors. This is the first chapter where we see an odd organization of topics. It seems like the first part of the chapter really belonged with chapter 2, but including it here instead probably balanced chapter length better. Chapter length was quite even throughout the book, and there were several cases like this where topics were spliced and diced between chapters. It didn't hurt the flow much on a complete read-through, but it would likely make going back and finding things more difficult.

The next chapter switches gears and looks at how to generate good training sets with data preprocessing, and how to train a model effectively without overfitting using regularization. Regularization is a way to systematically penalize the model for assigning large weights that would lead to memorizing the training data during training. Another way to avoid overfitting is to use ensemble learning with a model like random forests, which are introduced in this chapter as well. The following chapter looks at how to do dimensionality reduction, both unsupervised with principal component analysis (PCA) and supervised with linear discriminant analysis (LDA).
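
To make the regularization idea concrete: in scikit-learn's linear models the penalty strength is controlled by the inverse parameter C, and strengthening the penalty visibly shrinks the learned weights. A small sketch on a built-in dataset (my example, not the book's):

# Hedged sketch: L2 regularization in scikit-learn is controlled by C,
# the *inverse* regularization strength (smaller C = stronger penalty on large weights).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

for C in (100.0, 1.0, 0.01):
    clf = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    print(f"C={C:>6}: mean |weight| = {np.abs(clf.coef_).mean():.3f}")  # weights shrink as C drops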

Chapter 6 comes back to how to train your dragon…I mean model…by tuning the hyperparameters of the model. The hyperparameters are just the settings of the model, like what its decision function is or how fast its learning rate is. It's important during this tuning that you don't pick hyperparameters that are just best at identifying the test set, as the authors explain:
A better way of using the holdout method for model selection is to separate the data into three parts: a training set, a validation set, and a test set. The training set is used to fit the different models, and the performance on the validation set is then used for the model selection. The advantage of having a test set that the model hasn't seen before during the training and model selection steps is that we can obtain a less biased estimate of its ability to generalize to new data.
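
In code, that three-way split is just two calls to train_test_split: tune against the validation set, then touch the test set exactly once at the end. A hedged sketch with a made-up hyperparameter grid:

# Hedged sketch: carving out train / validation / test sets with two splits.
# Hyperparameters are tuned against the validation set; the test set stays untouched
# until the very end, so the final score is a less biased estimate of generalization.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

best = max(
    (SVC(gamma=g).fit(X_train, y_train).score(X_val, y_val), g)
    for g in (1e-4, 1e-3, 1e-2)        # made-up grid of candidate gamma values
)
final = SVC(gamma=best[1]).fit(X_train, y_train)
print(f"chosen gamma={best[1]}, test accuracy={final.score(X_test, y_test):.2f}")
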
It seems odd that a separate test set isn't enough, but it's true. Training a machine isn't as simple as it looks. Anyway, the next chapter circles back to ensemble learning with a more detailed look at bagging and boosting. (Machine learning has such creative names for things, doesn't it?) I'll leave the explanations to the book and get on with the review, so the next chapter works through an extended example application to do sentiment analysis of IMDb movie reviews. It's kind of a neat trick, and it uses everything we've learned so far together in one model instead of piecemeal with little stub examples. Chapter 9 continues the example with a little web application for submitting new reviews to the model we trained in the previous chapter. The trained model will predict whether the submitted review is positive or negative. This chapter felt a bit out of place, but it was fine for showing how to use a model in a (semi-)real application.

Chapter 10 covers regression analysis in more depth with single and multiple linear and nonlinear regression. Some of this stuff has been seen in previous chapters, and indeed, the cross-referencing starts to get a bit annoying at this point. Every single time a topic comes up that's covered somewhere else, it gets a reference with the full section name attached. I'm not sure how I feel about this in general. It's nice to be reminded of things that you've read about hundreds of pages back and I've read books that are more confusing for not having done enough of this linking, but it does get tedious when the immediately preceding sections are referenced repeatedly. The next chapter is similar with a deeper look at unsupervised clustering algorithms. The new k-means algorithm is introduced, but it's compared against algorithms covered in chapter 3. This chapter also covers how we can decide if the number of clusters chosen is appropriate for the data, something that's not so easy for high-dimensional data.
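
One common way to sanity-check the cluster count, in the same spirit as what the book covers, is the silhouette score; here's a quick sketch on synthetic blobs rather than the book's data:

# Hedged sketch: judging the number of clusters with silhouette scores.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=3)   # synthetic data with 4 true clusters
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=3).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))            # the highest score should land near k=4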

Now that we're two-thirds of the way through the book, we come to the elephant in the machine learning room, the multilayer artificial neural network. These networks are built up from perceptrons with various activation functions:
However, logistic activation functions can be problematic if we have highly negative input since the output of the sigmoid function would be close to zero in this case. If the sigmoid function returns output that are close to zero, the neural network would learn very slowly and it becomes more likely that it gets trapped in the local minima during training. This is why people often prefer a hyperbolic tangent as an activation function in hidden layers.
And they're trained with various types of back-propagation. Chapter 12 shows how to implement neural networks from scratch, and chapter 13 shows how to do it with TensorFlow, where the network can end up running on the graphics card supercomputer inside your PC. Since TensorFlow is a complex beast, chapter 14 gets into the nitty gritty details of what all the pieces of code do for implementation of the handwritten digit identifier we saw in the last chapter. This is all very cool stuff, and after learning a bit about how to do the CUDA programming that's behind this library with CUDA by Example, I have a decent appreciation for what Google has done with making it as flexible, performant, and user-friendly as they can. It's not simple by any means, but it's as complex as it needs to be. Probably.
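
For anyone who just wants to see the shape of such a network, here's a minimal sketch using the modern tf.keras API rather than the book's lower-level TensorFlow code, with tanh hidden layers as in the quote above. It's an illustration of the idea, not the book's implementation:

# Hedged sketch: a small multilayer network for the handwritten-digit task in tf.keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0      # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="tanh"),     # tanh hidden layers, per the quote above
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(10, activation="softmax"),   # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))                  # [test loss, test accuracy]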

The last two chapters look at two more types of neural networks: the deep convolutional neural network (CNN) and the recurrent neural network (RNN). The CNN does the same hand-written digit classification as before, but of course does it better. The RNN is a network that's used for sequential and time-series data, and in this case, it was used in two examples. The first example was another implementation of the sentiment analyzer for IMDb movie reviews, and it ended up performing similarly to the regression classifier that we used back in chapter 8. The second example was for how to train an RNN with Shakespeare's Hamlet to generate similar text. It sounds cool, but frankly, it was pretty disappointing for the last example of the most complicated network in a machine learning book. It generated mostly garbage and was just a let-down at the end of the book.

Even though this book had a few issues, like tedious code duplication and explanations in places, the annoying cross-referencing, and the out-of-place chapter 9, it was a solid book on machine learning. I got a ton out of going through the implementations of each of the machine learning algorithms, and wherever the topics started to stray into more in-depth material, the authors provided references to the papers and textbooks that contained the necessary details. Python Machine Learning is a solid introductory text on the fundamental machine learning algorithms, covering how they work mathematically, how they're implemented in Python, and how to use them with scikit-learn and TensorFlow.


Of these two books, Data Smart is a definite read if you're at all interested in data science. It does a great job of showing how the basic data analysis algorithms work using the surprisingly effective method of laying out all of the calculations in spreadsheets, and doing it with good humor. Python Machine Learning is also worth a look if you want to delve into machine learning models, see how they would be implemented in Python, and learn how to use those same models effectively with scikit-learn and TensorFlow. It may not be the best book on the topic, but it's a solid entry and covers quite a lot of material thoroughly. I was happy with how it rounded out my knowledge of machine learning.

Guild Ball New Resin Models Review

So after Captain Con all the Guild Ball talk from my buddies got to me and I'm completely hyped for the game all over again.  It's odd, it was roughly around this time last year that I had the same thing happen to me as I go back and look through the blog posts.  I guess I just have to accept that my love for Warmachine and Guild Ball will wax and wane for seemingly no reason.

But since I am interested in the game, I wanted to pick up the minor guilds for the main guilds that I already own.  This meant picking up Navigators and Miners. What's more is that Steamforged surprised us all and released new captains for 4 of the original guilds, one of which is for a team I own: Yukai on Fishermen.

Since I had to skip out on game night tonight but did get time to build the models that arrived, I figured I'd put up a review of sorts.


It wouldn't be Steamforged if there wasn't some kind of oddity with how the models were being released.

The good news was that the Captains were being released individually, not in a big box that had models for various guilds! Sweet!

The bad news was that it was basically direct only! Boo!

I tried to order through my FLGS, but when they had set up their retail account on SFG's website, they were effectively ordering at cost, which seemed...odd? As it ended up, they couldn't order for me since the models I wanted were out of stock at the time, and since they weren't making any money on the sale, it seemed like a waste to try to line up when the models would pop back in stock AND get over to my FLGS to effectively put in a web order for me that they weren't even going to get a cut of. In the end, I simply placed my order through SFG once the models popped back into stock a day or so later.

The last odd bit was that the Miners guild and all the new captains were being released in resin.  I have not yet really seen any of SFG's resin minis. The rationale according to SFG was that Miners had such big models that it made sense to use resin, and they were obviously having problems getting the PVC models out of manufacturing in China.

That last part is a bit of a shame since I'm an absolute huge fan of SFG's PVC models.  I have Blacksmiths and now Navigators and the fact that you open the box and can use them with zero assembly or fuss is amazing. The sculpts are solid IMO and they've painted up nicely so far.

So what about the resins? Well, it's a bit of a mixed bag in terms of quality. It reminds me of the original Games Workshop Finecast models when they first released. It feels like a kind of soft, fragile resin.

Here's my assembled set of new models:



The models don't look particularly bad, especially not 3 feet away, though there are a few rough bits. 

The first issue I had was stability. When I had seen my friend's Yukai model at game night last week, they were kind enough to let me play a game with it to try out the new rules. What immediately struck me was how light and flimsy the model felt. It doesn't help that the model attaches to the base via a single leg connection.

As you can see below, both Yukai and Spade use the single-leg connection. It looks cool, but it's just touchy and is begging to come off the base.



My solution was pinning through the leg as deep as I could manage without damaging the model. I then pinned through the base itself, which is a tad shorter height-wise than the usual base height you'd expect.

In both cases I went right through the base immediately and after pinning through I had to cut off the paperclip pin and then file down the bottom to make it flush. You don't want to go tearing up the nice neoprene mats we play on.

Once I did pin through the leg and base, the models definitely feel a bit more solid. I highly recommend it. 

In terms of quality, I'm conflicted. Yukai looks good, but the model had a bit of odd flash that had to be shaved off the chin/neck which was a bit tight. 

There were some definite issues with Fissure (the Tank), at least on the back of the model which was a bit of a mess and required a bit of cleaning. I'm sure the paint job can hide a lot of the problems here, but the back isn't anywhere near as pretty as the front. It's odd, because from literally every other angle the model is gorgeous IMO! It's a Metal Slug that plays murder-soccer, I was 100% sold the second I saw the render.



The other odd thing was that on the large bases the texture for the base looks kind of ill defined.  There are also a good amount of mold lines to clean up which I'll have to go back and do. It's not the end of the world and nothing I can't fix up with paint and basing materials, it's just a little disappointing.



On the plus side the guild comes with a tiny little tank-ball. IT'S ADORABLE!



Final thoughts

Based on a podcast interview with Double Dodge, CEO Matt Hart spoke about the problems they've been having with production and the desire to just hit a release date and knock a release out of the park, no delays.

On that front, they nailed it. While the captains did fluctuate in and out of stock, I was able to order both Yukai and the Miners guild on March 1st. It did take a week for SFG to get it out the door, shipped March 8th, but then the package made it from the UK to New Jersey by Tuesday, March 12th.  I understand I was in the second wave which is what caused my delay. If you had ordered as soon as the models went up on pre-order they were arriving very promptly. 

The cost on the captains isn't bad. $15 for a single blister is pretty much standard and I didn't think twice before ordering.  The Miners box was a bit more of a stretch. It was $80 for the resins plus terrain and (tank!) ball.  Given that the old 6 player guild boxes used to retail for $75 and were metal models, I have to wonder if it wouldn't have been better to just do the Miners in metal if they had to meet production. 

So while they're able to hit the date, the models are in what feels to be a worse material than the PVC, require assembly, can feel flimsy, and are as expensive as metal without being metal. 

On the other side the model designs are great, the rules are nice, and the problems aren't anything an experienced modeler/painter can't solve.  If I had seen what I'd get before I ordered I'd still have bought them all over again, though it does feel a bit steep for what you're getting. In the end, I like the game and company enough that I was going to buy the models. 

I would definitely be hesitant to tell my newer-to-the-hobby friends or less hobby-inclined friends to order the Miners guild vs. any of the PVC boxes, which are amazing in both value and quality in contrast.

In the end, I do hope SFG is able to sort out their production issues since I think their PVC products are excellent when it comes to minor or new guilds. 

Conversely, if this is how we get single blisters of new captains or models, I am 100% behind the approach if it's what allows the releases to work economically for SFG. Having seen/held Veteran Boar, the model is much more solid than Yukai, but it's also just a bigger model, so it's easier to execute, I suppose.

Apple Might Finally Let You Pick Chrome Over Safari In iOS 14 - PCWorld

Thursday, February 20, 2020

So Far Behind...

   There are so many gaming things that have happened in my life over the last few weeks that I haven't talked about.

  First, I went to DesotoCon, in Kansas, back at the end of July. I started a blog post about it and will finish it, I promise. It's even going to be back dated so it will appear before this one. Not many (any?) photos from it though. Well, a few, I think.

  The next week I went to Indianapolis for GenCon. Met a lot of great people, hung out with some friends from Thread Raiders, Saving Throw Show, and Dragons and Things (best Pathfinder liveplay stream, Fridays at 6:00 Pacific on Twitch). Bought a bunch of stuff. Again, it deserves its own post and I will work on that. A few more photos there.

   I've also released the first product from Goblyn Head Press on DriveThruRPG. It's a supplement designed for D&D 5e called Sacred Sites. It was written by Eli Arndt, whom you guys have seen me mention before around here. Nine different places where you can encounter the sacred or profane. It has sold a few copies already and it's only been up about a week. Very excited about that. Probably deserves its own post, too.


  And we've gotten a few more sessions of Starfinder in. Kicked one guy out of our group, got another new player. Still sitting at three players so if anyone wants to join us in Santa Fe, TX (in Galveston County, on the mainland)...

   And I painted a few minis. Not much. I really need to get to work on the Pledge or I am screwed.

   Oh, two new display cases came in and I got one put together. Detolf from IKEA.


   And I have been drawing more maps on my Wacom tablet. So that's getting me closer to done with another Goblyn Head project.

   All in all, I guess I have been busy. Just not very good at reporting. I'll try to get caught up on all of that the next few days.

Castle Of Illusion Starring Mickey Mouse (PC)

Castle of Illusion Starring Mickey Mouse remake title screen (PC)
Developer: Sega Studios Australia | Release Date: 2013 | Systems: Win, PS3, Xbox 360, iOS, Windows Phone, Android, OS X

This week on Super Adventures, I'm playing Castle of Illusion! Again!

I didn't mean to, not originally. I just wanted to grab a couple of screenshots for my article about the Mega Drive game, to show what the remake looked like by comparison. But it turns out that they've remade a lot more than just the graphics, so I decided to give it its own article instead.

I've had this one lying around in my Steam library unplayed for three years now, ever since they cunningly manipulated me into buying it by announcing it was going to be taken off the store. Sure it was almost certainly going to be put back on eventually, but what if it wasn't? I could've missed my chance to ever play the game! (It came back seven months later.)

This Castle of Illusion first came out in 2013, 23 years after the original (and 6 years before now) and it was the last game to be made by Sega Studios Australia. They'd been around for about 10 years by that point and had been known as Creative Assembly Australia for most of it, developing games like Medieval II: Total War and London 2012 - the officially licensed game of the 2012 Olympic Games. Not a whole lot of platformers though, unless you count a port of the 2D Sonic games to the DS, so that's not massively encouraging. But hey the other Creative Assembly came out with Alien: Isolation out of nowhere and everyone loves that except me, so maybe this is actually really good!


Friday, February 14, 2020

Brave Browser the Best privacy-focused Browser of 2020



Out of all the privacy-focused products and apps available on the market, Brave has been voted the best. Other winners of Product Hunt's Golden Kitty awards showed that there was a huge interest in privacy-enhancing products and apps such as chats, maps, and other collaboration tools.

An extremely productive year for Brave

Last year was a pivotal one for the crypto industry, but few companies managed to see the kind of success Brave did. Almost every day of the year was packed with action, as the company managed to officially launch its browser, get its Basic Attention Token out, and onboard hundreds of thousands of verified publishers onto its rewards platform.

Luckily, the effort Brave has been putting into its product hasn't gone unnoticed.

The company's revolutionary browser has been voted the best privacy-focused product of 2019, for which it received a Golden Kitty award. The awards, hosted by Product Hunt, were given to the most popular products across 23 different product categories.

Ryan Hoover, the founder of Product Hunt said:

"Our annual Golden Kitty awards celebrate all the great products that makers have launched throughout the year"

Brave's win is important for the company—with this year seeing the most user votes ever, it's a clear indicator of the browser's rapidly rising popularity.

Privacy and blockchain are the strongest forces in tech right now

If reaching 10 million monthly active users in December was Brave's crowning achievement, then the Product Hunt award was the cherry on top.

The recognition Brave got from Product Hunt users shows that a market for privacy-focused apps is thriving. All of the apps and products that got a Golden Kitty award from Product Hunt users focused heavily on data protection. Everything from automatic investment apps and remote collaboration tools to smart home products emphasized their privacy.

AI and machine learning rose as another noteworthy trend, but blockchain seemed to be the most dominant force in app development. Blockchain-based messaging apps and maps were hugely popular with Product Hunt users, who seem to value innovation and security.

For those users, Brave is a perfect platform. The company's research and development team has recently debuted its privacy-preserving distributed VPN, which could potentially bring even more security to the user than its already existing Tor extension.

Brave's effort to revolutionize the advertising industry has also been recognized by some of the biggest names in publishing—major publications such as The Washington Post, The Guardian, NDTV, NPR, and Qz have all joined the platform. Some of the highest-ranking websites in the world, including Wikipedia, WikiHow, Vimeo, Internet Archive, and DuckDuckGo, are also among Brave's 390,000 verified publishers.

Earn Basic Attention Token (BAT) with Brave Web Browser

Try Brave Browser

Get $5 in free BAT to donate to the websites of your choice.